B, an open-weight multimodal vision AI model designed to deliver strong math, science, document and UI reasoning with far less training data and compute than much larger systems.
Explore how vision-language-action models like Helix, GR00T N1, and RT-1 are enabling robots to understand instructions and act autonomously.
A team of researchers has found a way to steer the output of large language models by manipulating specific concepts inside these models. The new method could lead to more reliable, more efficient, ...
Scoping review finds large language models can support glaucoma education and decision support, but accuracy and multimodal limits persist.
A monthly overview of things you need to know as an architect or aspiring architect.
Anthropic claims Chinese AI labs ran large-scale Claude distillation attacks to steal data and bypass safeguards.
A major difference between LLMs and LTMs is the type of data they're able to synthesize and use. LLMs use unstructured data, such as text, social media posts, and emails. LTMs, on the other hand, can ...
Qwen3.5 comes in open-weight and hosted API versions, with the company advertising performance and cost improvements over previous versions. Qwen3.5 supports new agentic capabilities and is ...