Alibaba's Qwen team introduces Qwen3.6-35B-A3B, a sparse mixture-of-experts (MoE) model with 35B total parameters (the A3B suffix denoting, per Qwen's naming convention, roughly 3B activated per token). It posts strong results across benchmarks including SWE-bench and Terminal-Bench 2.0, with notable gains in agentic coding and frontend code generation.
AWS Marketing's TAA team partnered with Gradial to develop an agentic AI solution on Amazon Bedrock, reducing webpage assembly time by over 95%. The solution streamlines content publishing workflows, freeing marketing teams to focus on higher-impact customer experiences.
Video semantic search is transforming content delivery across industries by enabling fast, accurate access to specific moments in video. Amazon Nova Multimodal Embeddings provides a single model that embeds text, images, video, and audio into a shared semantic vector space, enabling cross-modal retrieval with strong accuracy and cost efficiency.
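A minimal sketch of what retrieval over a shared vector space looks like in practice. The model ID and the request/response schema below are illustrative assumptions rather than the documented Nova Multimodal Embeddings API; check the Bedrock docs for the exact payload format.

```python
# Sketch: cross-modal retrieval over precomputed video-segment embeddings.
# MODEL_ID and the JSON schema are assumed, not the documented Nova API.
import json

import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "amazon.nova-multimodal-embeddings-v1:0"  # assumed identifier


def embed_text(text: str) -> np.ndarray:
    """Embed a text query into the shared semantic vector space."""
    resp = bedrock.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps({"inputText": text}),  # assumed request schema
    )
    payload = json.loads(resp["body"].read())
    return np.array(payload["embedding"])  # assumed response field


def search(query: str, segment_embeddings: np.ndarray, top_k: int = 5):
    """Rank video-segment embeddings by cosine similarity to the query."""
    q = embed_text(query)
    q = q / np.linalg.norm(q)
    segs = segment_embeddings / np.linalg.norm(
        segment_embeddings, axis=1, keepdims=True
    )
    return np.argsort(-(segs @ q))[:top_k]
```

Because all modalities land in one vector space, the same `search` function works whether the query is text and the corpus is video, or vice versa.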
Amazon Bedrock now offers granular cost attribution, automatically assigning inference costs to IAM principals such as users, roles, and federated identities from providers like Okta. Cost allocation tags enable aggregation by team, project, or custom dimension in AWS Cost Explorer and CUR 2.0, simplifying financial planning and optimization.
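As a sketch of the aggregation step, the Cost Explorer API can group Bedrock spend by an activated cost allocation tag. The tag key `team` and the service filter value are assumptions; substitute your own tag keys.

```python
# Sketch: aggregate Amazon Bedrock spend by a cost allocation tag via the
# Cost Explorer API. The "team" tag key is hypothetical.
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-01-01", "End": "2025-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "SERVICE", "Values": ["Amazon Bedrock"]}},
    GroupBy=[{"Type": "TAG", "Key": "team"}],  # hypothetical tag key
)

for group in resp["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]  # e.g. "team$growth-marketing"
    cost = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(cost):.2f}")
```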
MIT Associate Professors Jacob Andreas and Brett McGuire win the 2026 Harold E. Edgerton Faculty Achievement Award for groundbreaking work in natural language processing and astrochemistry. Andreas' innovative research bridges foundational theory with real-world impact in language learning and AI.
Automated Reasoning checks in Amazon Bedrock Guardrails use formal verification to make AI outputs mathematically checkable and auditable for regulated industries. Compliance teams get provably sound validations rather than statistical confidence, addressing the limitations of probabilistic AI validation.
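A minimal sketch of validating a model answer against a guardrail with an Automated Reasoning policy attached, via the ApplyGuardrail API. The guardrail ID and version are placeholders, and the exact shape of the assessments payload should be verified against the Bedrock documentation.

```python
# Sketch: check a model-generated answer against a Bedrock guardrail that
# has an Automated Reasoning policy attached. IDs are placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

resp = bedrock.apply_guardrail(
    guardrailIdentifier="gr-EXAMPLE123",  # placeholder guardrail ID
    guardrailVersion="1",
    source="OUTPUT",  # validate model output rather than user input
    content=[{"text": {"text": "Customers over 65 qualify for the senior rate."}}],
)

# "GUARDRAIL_INTERVENED" means at least one check flagged the content;
# the assessments carry the detailed findings for audit.
print(resp["action"])
for assessment in resp.get("assessments", []):
    print(assessment)
```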
Researchers have mapped the learning dynamics of word2vec, showing that it acquires linearly structured representations in discrete, sequential steps. As a minimal neural model, word2vec offers a tractable window into feature learning in more advanced language models.
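A quick illustration (not the paper's analysis): the linear structure shows up as vector arithmetic in trained word2vec embeddings, which can be checked with gensim's pretrained Google News vectors (a large download, roughly 1.6 GB).

```python
# Sketch: linear structure in word2vec embeddings as vector arithmetic.
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")

# Linear analogy: king - man + woman lands closest to queen.
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Related words sit nearby under cosine similarity.
print(wv.similarity("king", "queen"))
```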
Modeling an imaging system as an encoder that maps objects to noiseless images lets researchers quantify how well its measurements distinguish objects. AI can extract useful information even when it is encoded in ways humans cannot interpret, enabling imaging systems to be optimized for information content rather than human-readable image quality.
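A toy sketch of the idea (not the paper's estimator): compare two simulated imaging systems by how well their measurements let a simple decoder tell objects apart, using decoding accuracy as a crude proxy for information content.

```python
# Toy sketch: decoding accuracy as a proxy for measurement information.
import numpy as np

rng = np.random.default_rng(0)
objects = rng.normal(size=(200, 16))      # latent object parameters
labels = (objects[:, 0] > 0).astype(int)  # property we want to recover


def simulate(system: np.ndarray, noise: float) -> np.ndarray:
    """Encoder: object -> image, plus measurement noise."""
    return objects @ system + noise * rng.normal(size=(200, system.shape[1]))


def decode_accuracy(images: np.ndarray) -> float:
    """Leave-one-out 1-nearest-neighbor accuracy as a distinguishability proxy."""
    d = np.linalg.norm(images[:, None] - images[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return float((labels[d.argmin(axis=1)] == labels).mean())


sharp = simulate(rng.normal(size=(16, 32)), noise=0.1)  # informative system
blurry = simulate(np.ones((16, 32)) * 0.1, noise=0.1)   # collapses objects
print(decode_accuracy(sharp), decode_accuracy(blurry))
```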
PLAID, a model that jointly generates protein sequences and structures, illustrates AI's expanding role in biology. The model tackles challenges such as all-atom generation and organism specificity, with the aim of generating useful proteins efficiently.
Training a modern large language model begins with pretraining to capture general language patterns, followed by supervised fine-tuning for specific tasks. Techniques such as LoRA and RLHF then refine the model before it is deployed in real-world systems.
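A minimal sketch of the LoRA step in the fine-tuning stage: freeze the pretrained weights and train small low-rank adapters instead. `gpt2` here is a stand-in base model, and `target_modules` depends on the architecture being adapted.

```python
# Sketch: attach LoRA adapters to a frozen pretrained model with peft.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

config = LoraConfig(
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling applied to the update
    target_modules=["c_attn"],   # attention projections in gpt2
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction is trainable
# `model` can now go through the usual supervised fine-tuning loop.
```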
Researchers from UC San Diego and Together AI introduce Parcae, a looped transformer architecture that outperforms prior models with the same parameter count and training data. By reapplying weight-tied layers, Parcae's design eases memory constraints, spends more compute per forward pass, and resolves stability issues seen in past looped models.
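A minimal sketch of the looped-transformer idea in general (not Parcae's actual architecture): a single weight-tied block applied several times, so compute per forward pass grows while the parameter count stays fixed.

```python
# Sketch: a weight-tied transformer block looped n_loops times.
import torch
import torch.nn as nn


class LoopedTransformer(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_loops=8):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.norm = nn.LayerNorm(d_model)  # stabilizes repeated application
        self.n_loops = n_loops

    def forward(self, x):
        for _ in range(self.n_loops):  # same weights, more compute
            x = self.norm(self.block(x))
        return x


model = LoopedTransformer()
out = model(torch.randn(2, 16, 256))  # (batch, seq, d_model)
print(sum(p.numel() for p in model.parameters()), out.shape)
```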
Understanding complex machine learning systems like Large Language Models (LLMs) is crucial for AI. New algorithms, SPEX and ProxySPEX, aim to identify critical feature interactions at scale by measuring influence through ablation, isolating the drivers of a model's decisions with as few perturbations as possible.
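A toy sketch of ablation-based influence (not the SPEX/ProxySPEX machinery, which uses far fewer perturbations): mask inputs and measure how the model's output moves, alone and in pairs. The `model` function and masking scheme are stand-ins.

```python
# Toy sketch: first- and second-order ablation influence on a stand-in model.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)


def model(mask: np.ndarray) -> float:
    """Stand-in scorer over 8 features; mask[i]=0 ablates feature i."""
    x = mask.astype(float)
    return float(w @ x + 2.0 * x[1] * x[4])  # planted pairwise interaction


full = np.ones(8)

# First-order influence: output drop when one feature is ablated.
influence = [model(full) - model(np.where(np.arange(8) == i, 0, full))
             for i in range(8)]


def interaction(i, j):
    """Effect of (i, j) beyond what ablating each feature alone explains."""
    mi, mj, mij = full.copy(), full.copy(), full.copy()
    mi[i] = 0; mj[j] = 0; mij[[i, j]] = 0
    return model(full) - model(mi) - model(mj) + model(mij)


print(np.round(influence, 2))
print(round(interaction(1, 4), 2))  # recovers the planted interaction (2.0)
```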
Google DeepMind introduces Gemini Robotics-ER 1.6, an upgrade enhancing robot reasoning capabilities for real-world tasks. The model acts as a high-level strategist, guiding physical actions through advanced spatial reasoning and instrument reading.
Retailers face a persistent challenge with online shopping: customers cannot try products before buying, which drives up returns and erodes purchase confidence. Implementing virtual try-on technology with Amazon Nova Canvas and Amazon Rekognition can boost profitability and customer satisfaction. The AI-powered, serverless retail solution on AWS combines virtual try-on, smart recommendations, smart search, and analytics for a seamless online shopping experience.
ChatGPT shows bias against non-"standard" English varieties, with responses exhibiting stereotyping and condescension. A study prompting GPT-3.5 Turbo and GPT-4 with text in 10 English varieties finds that model responses retain Standard American English features rather than matching the input variety.
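A sketch of the study's prompting setup in spirit; the prompts and variety list below are illustrative, not the paper's materials.

```python
# Sketch: send the same content written in different English varieties and
# compare the model's responses. Example prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = {
    "Standard American English": "I'm not sure this plan is going to work.",
    "Singaporean English": "This plan sure can work or not, I also dunno leh.",
}

for variety, text in prompts.items():
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": text}],
    )
    print(variety, "->", resp.choices[0].message.content[:120])
```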