NEWS IN BRIEF: AI/ML FRESH UPDATES

Get your daily dose of global tech news and stay ahead in the industry! Read more about AI trends and breakthroughs from around the world.

Microsoft CTO Stands Firm on LLM Scaling Laws

Microsoft CTO Kevin Scott maintains that large language model scaling laws will keep driving AI progress, pushing back on suggestions that gains from scale are tapering off. Scott, who brokered the $13 billion technology-sharing deal between Microsoft and OpenAI, points to the continued improvements in AI capabilities that come from scaling up model size and training data.
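Scaling laws of this kind are usually written as power laws relating loss to parameter count and training tokens. The sketch below uses a Chinchilla-style functional form; the constants are roughly those reported by Hoffmann et al. (2022) and are shown purely for illustration, not as the figures Scott referenced.

```python
# Illustrative Chinchilla-style scaling law: predicted loss as a function of
# parameter count N and training tokens D. Constants are illustrative defaults
# in the spirit of Hoffmann et al. (2022), not fitted or authoritative values.

def scaling_law_loss(n_params: float, n_tokens: float,
                     E: float = 1.69, A: float = 406.4, B: float = 410.7,
                     alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted training loss under a power-law scaling model."""
    return E + A / n_params ** alpha + B / n_tokens ** beta

# Scaling both model size and data lowers the predicted loss.
print(scaling_law_loss(7e9, 1.4e12))    # ~7B params, ~1.4T tokens
print(scaling_law_loss(70e9, 14e12))    # 10x params, 10x tokens
```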

Ensuring AI Trustworthiness: Pre-Deployment Assessment

Researchers from MIT and the MIT-IBM Watson AI Lab have developed a technique for estimating the reliability of foundation models, such as those behind ChatGPT and DALL-E, before deployment. By training an ensemble of slightly different models and measuring how consistently they behave on the same inputs, the researchers can assign reliability scores and rank models for a given downstream task.
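A minimal sketch of the general idea, not the authors' exact algorithm: obtain several slightly different model variants, check whether they place a test input near the same reference points in their embedding spaces, and treat disagreement as a sign of lower reliability. The embed() interface below is a hypothetical stand-in for whatever feature extractor each variant exposes.

```python
# Hedged sketch of ensemble-consistency reliability scoring (illustrative only).
# Assumes each model variant exposes a hypothetical embed(x) -> 1-D feature vector.

import numpy as np

def nearest_reference(embedding, reference_embeddings):
    """Index of the closest reference point in one model's embedding space."""
    dists = np.linalg.norm(reference_embeddings - embedding, axis=1)
    return int(np.argmin(dists))

def consistency_score(models, x, reference_inputs) -> float:
    """Fraction of model pairs that agree on x's nearest reference point.

    Each slightly different model embeds the test input and a shared reference
    set; if the variants place x near the same reference points, the prediction
    for x is treated as more reliable.
    """
    nearest = []
    for m in models:
        refs = np.stack([m.embed(r) for r in reference_inputs])
        nearest.append(nearest_reference(m.embed(x), refs))
    pairs = [(i, j) for i in range(len(nearest)) for j in range(i + 1, len(nearest))]
    agree = sum(nearest[i] == nearest[j] for i, j in pairs)
    return agree / len(pairs)
```

Averaging this score over a held-out set gives a rough dataset-level reliability estimate that can be compared across candidate models before deployment.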

The Rise of Autonomous Weapons

Military use of AI-enabled weapons is growing, with companies such as Elbit Systems at the forefront of developing lethal autonomous drones. The sector is booming as defense companies showcase their AI advances for combat applications.

Call for Investigation: NDAs at OpenAI

OpenAI whistleblowers are seeking an investigation into restrictive contracts that required employees to obtain permission before contacting regulators, potentially stifling concerns about the company. OpenAI's non-disclosure agreements are under scrutiny for the repercussions they could impose on employees who raise issues with federal authorities.

The Influence of Conspiracy Theories on Politics

Renée DiResta, former research manager of the Stanford Internet Observatory, examines online propaganda in her new book. She traces how propaganda has evolved and the effects it has on society and politics, arguing that a more accurate diagnosis of the problem is needed.

Unlocking the Secrets of Time Series for LLMs

Foundation models such as large language models (LLMs) are being adapted to time series modeling in the form of large time series foundation models (LTSMs). Because time series, like text, are sequential data, LTSMs aim to learn from large and diverse collections of time series and transfer that knowledge to tasks such as outlier detection and classification, building on the success of LLMs in natural language processing.
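One way to make time series look LLM-like is to cut a normalized series into fixed-length "patches" that play the role tokens play for text. The sketch below shows only that data-preparation step under assumed patch and stride sizes; it is not a specific LTSM architecture.

```python
# Hedged sketch: preparing a univariate time series for an LTSM-style model.
# Patch length, stride, and the downstream backbone are illustrative assumptions.

import numpy as np

def patchify(series: np.ndarray, patch_len: int = 16, stride: int = 8) -> np.ndarray:
    """Instance-normalize a 1-D series and slice it into overlapping patches."""
    series = (series - series.mean()) / (series.std() + 1e-8)
    starts = range(0, len(series) - patch_len + 1, stride)
    return np.stack([series[s:s + patch_len] for s in starts])

# Example: a 1,000-step synthetic series becomes a (n_patches, patch_len) matrix
# of "tokens" that a transformer-style backbone could embed and attend over,
# e.g. flagging patches it reconstructs poorly as outliers.
t = np.arange(1000)
series = np.sin(0.02 * t) + 0.1 * np.random.randn(1000)
patches = patchify(series)
print(patches.shape)  # (124, 16) with the defaults above
```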