AI Safety and Regulation

Overview of recent AI industry news including OpenAI staff departures, Sony Music Group's copyright warnings, Scarlett Johansson's voice usage issue, and new developments in ChatGPT search integration.

Last Week in AI: Episode 33

1. Significant Industry Moves

OpenAI Staff Departures and Safety Concerns

Several key staff members responsible for safety at OpenAI have recently left the company. This wave of departures raises questions about internal dynamics and the organization's commitment to AI safety protocols, and it could affect OpenAI's ability to maintain and enforce robust safety measures as it continues to develop advanced AI technologies.

For more details, you can read the full article on Gizmodo.

Sony Music Group’s Warning to AI Companies

Sony Music Group has issued warnings to approximately 700 companies for using its content to train AI models without permission. This move highlights the growing tension between content creators and AI developers over intellectual property rights and the use of copyrighted materials in AI training datasets.

For more details, you can read the full article on NBC News.

Scarlett Johansson’s Voice Usage by OpenAI

Scarlett Johansson revealed that OpenAI had approached her to voice its AI models; after she declined, the company released a voice that many listeners found strikingly similar to hers. The incident underscores the ethical and legal considerations surrounding the use of celebrity likenesses in AI applications, and Johansson's objection reflects broader concerns about consent and compensation in the era of AI-generated content.

For more details, you can read the full article on TechCrunch.

ChatGPT’s New Search Product

OpenAI is reportedly working on a stealth search product that could integrate ChatGPT capabilities directly into search engines. This new product aims to enhance the search experience by providing more intuitive and conversational interactions. The development suggests a significant shift in how AI could transform search functionalities in the near future.

For more details, you can read the full article on Search Engine Land.

2. Ethical Considerations and Policy

Actors’ Class-Action Lawsuit Over Voice Theft

A group of actors has filed a class-action lawsuit against an AI startup, alleging unauthorized use of their voices to train AI models. This lawsuit highlights the ongoing legal battles over voice and likeness rights in the AI industry. The outcome of this case could set a precedent for how AI companies use personal data and celebrity likenesses in their products.

For more details, you can read the full article on The Hollywood Reporter.

Inflection AI’s Vision for the Future

Inflection AI is positioning itself to redefine the future of artificial intelligence. The company aims to create AI systems that are more aligned with human values and ethical considerations. Their approach focuses on transparency, safety, and ensuring that AI benefits all of humanity, reflecting a commitment to responsible AI development.

For more details, you can read the full article on Inflection AI.

Meta’s Introduction of Chameleon

Meta has introduced Chameleon, a state-of-the-art multimodal AI model capable of processing and understanding multiple types of data simultaneously. This new model is designed to improve the integration of various data forms, enhancing the capabilities of AI applications in fields such as computer vision, natural language processing, and beyond.

For more details, you can read the full article on VentureBeat.

Humane’s Potential Acquisition

Humane, a startup known for its AI-driven wearable device, is rumored to be seeking acquisition. The company's AI Pin product has garnered attention for its innovative approach to personal AI assistants. The potential acquisition indicates a growing interest in integrating advanced AI into consumer technology.

For more details, you can read the full article on The Verge.

Adobe’s Firefly AI in Lightroom

Adobe has integrated its Firefly AI-powered generative removal tool into Lightroom. This new feature allows users to seamlessly remove unwanted elements from photos using AI, significantly enhancing the photo editing process. The tool demonstrates the practical applications of AI in creative software and the ongoing evolution of digital content creation.

For more details, you can read the full article on TechCrunch.

Amazon’s AI Overhaul for Alexa

Amazon plans to give Alexa an AI overhaul, introducing a monthly subscription service for advanced features. This update aims to enhance Alexa’s capabilities, making it more responsive and intuitive. The shift to a subscription model reflects Amazon’s strategy to monetize AI advancements and offer premium services to users.

For more details, you can read the full article on CNBC.

3. AI in Practice

Microsoft’s Recall AI Feature Under Investigation

Microsoft's new Recall feature, which continuously captures screenshots of user activity on Copilot+ PCs, is being examined by the UK's Information Commissioner's Office over privacy concerns. The inquiry will assess whether the feature meets data protection and regulatory standards. This case highlights the importance of regulatory oversight in the deployment of AI technologies.

For more details, you can read the full article on Mashable.

Near AI Chatbot and Smart Contracts

Near AI has developed a chatbot capable of writing and deploying smart contracts. This innovative application demonstrates the potential of AI in automating complex tasks in the blockchain ecosystem. The chatbot aims to make smart contract development more accessible and efficient for users.

For more details, you can read the full article on Cointelegraph.

Google Search AI Overviews

Google is rolling out AI-generated overviews for search results, designed to provide users with concise summaries of information. This feature leverages Google's advanced AI to enhance the search experience, offering quick and accurate insights on various topics.

For more details, you can read the full article on Business Insider.

Meta’s AI Advisory Board

Meta has established an AI advisory board to guide its development and deployment of AI technologies. The board includes experts in AI ethics, policy, and technology, aiming to ensure that Meta's AI initiatives are aligned with ethical standards and societal needs.

For more details, you can read the full article on Meta’s Investor Relations.

Stay tuned for more updates next week as we continue to cover the latest developments in AI.


"Last Week in AI" including OpenAI, Stack Overflow, Apple's new Photos app, YouTube Premium, Microsoft MAI-1, Eli Lilly, Audible, Apple's M4 chip, Google's Pixel 8a, machine learning in whale communication, and more.

Last Week in AI: Episode 31

Hey everyone, welcome to this week’s edition of “Last Week in AI.” This week’s stories provide a glimpse into how AI is reshaping industries and our daily lives. Let’s dive in and explore these fascinating developments together.

OpenAI and Stack Overflow Partnership

Partnership Announcement: OpenAI and Stack Overflow have formed a new API partnership to leverage their collective strengths—Stack Overflow’s technical knowledge platform and OpenAI’s language models.

Impact and Controversy: This partnership aims to empower developers by combining high-quality technical content with advanced AI models. However, some Stack Overflow users have protested, arguing it exploits their contributed labor without consent; staff have responded by suspending accounts and reverting protest edits. This raises questions about attribution for content creators and how future models will be trained, even as the partnership promises improved AI models. Read more

Apple’s New Photos App Feature

Feature Introduction: Apple is set to introduce a “Clean Up” feature in its Photos app update, leveraging generative AI for advanced image editing. This tool will allow users to remove objects from photos using a brush tool, similar to Adobe’s Content-Aware Fill.

Preview and Positioning: The feature is currently in testing on macOS 15, and Apple may preview it during the "Let Loose" iPad event on May 7, 2024. This positions the new iPads as AI-equipped devices, showcasing practical AI applications beyond chatbots and entertainment. Read more

YouTube Premium’s AI “Jump Ahead” Feature

Feature Testing: YouTube Premium subscribers can now test an AI-powered “Jump ahead” feature, allowing them to skip commonly skipped video sections. By double-tapping to skip, users can jump to the point where most viewers typically resume watching.

Availability and Aim: This feature is currently available on the YouTube Android app in the US for English videos and requires a Premium subscription. It complements YouTube’s “Ask” feature and aims to enhance the viewing experience by leveraging AI and user data. Read more

Microsoft’s MAI-1 Language Model Development

Model Development: Microsoft is developing a new large-scale AI language model, MAI-1, led by Mustafa Suleyman, the former CEO of Inflection AI. MAI-1 will have approximately 500 billion parameters, significantly larger than Microsoft’s previous models.

Strategic Significance: This development signifies Microsoft’s dual approach to AI, focusing on both small and large models. Despite its investment in OpenAI, Microsoft is independently advancing its AI capabilities, with plans to unveil MAI-1 at their Build conference. Read more

AI in Drug Discovery at Eli Lilly

Innovative Discovery: The pharmaceutical industry is integrating AI into drug discovery, with Eli Lilly scientists noting novel molecular designs generated by AI, an early milestone for AI-driven breakthroughs in biology.

Industry Impact: AI is expected to propose new drugs and generate designs beyond human capability. This integration promises faster development times, higher success rates, and exploration of new targets, reshaping drug discovery. Read more

AI-Narrated Audiobooks on Audible

Audiobook Trends: Over 40,000 AI-voiced titles have been added to Audible since Amazon launched a tool for self-published authors to generate AI narrations. This makes audiobook creation more accessible but has sparked controversy.

Industry Reaction: Some listeners dislike the lack of filters to exclude AI narrations, and human narrators fear job losses. Major publishers are embracing AI for cost savings, highlighting tensions between creative integrity and commercial incentives. Read more

Apple’s M4 Chip for iPad Pro

Processor Introduction: Apple’s M4 chip, the latest and most powerful processor for the new iPad Pro, offers groundbreaking performance and efficiency.

Key Innovations: The M4 chip features a 10-core CPU, 10-core GPU, advanced AI capabilities, and power efficiency gains. These innovations enable superior graphics, real-time AI features, and all-day battery life. Read more

Google’s Pixel 8a Smartphone

Affordable Innovation: The Pixel 8a, Google’s latest affordable smartphone, is priced at $499 and packed with AI-powered features and impressive camera capabilities.

Key Highlights: The Pixel 8a features a refined design, dual rear camera, AI tools, and enhanced security. It also offers family-friendly features and 7 years of software support. Read more

OpenAI’s Media Manager Tool

Tool Development: OpenAI is building a Media Manager tool to help creators manage how their works are included in AI training data. This system aims to identify copyrighted material across sources.

AI Training Approach: OpenAI uses diverse public datasets and proprietary data to train its models, collaborating with creators, publishers, and regulators to support healthy ecosystems and respect intellectual property. Read more

Machine Learning in Sperm Whale Communication

Breakthrough Discovery: MIT CSAIL and Project CETI researchers have discovered a combinatorial coding system in sperm whale vocalizations, akin to a phonetic alphabet, using machine learning techniques.

Communication Insights: By analyzing a large dataset of whale codas, researchers identified patterns and structures, suggesting a complex communication system previously thought unique to humans. This finding opens new avenues for studying cetacean communication. Read more

Sam Altman’s Concerns About AI’s Economic Impact

CEO’s Warning: Sam Altman, CEO of OpenAI, has expressed significant concerns about AI’s potential impact on the labor market and economy, particularly job disruptions and economic changes.

Economic Threat: Studies suggest AI could affect up to 60% of jobs in advanced economies, potentially leading to job losses and lower wages. Altman emphasizes the need to address these concerns proactively. Read more

AI Lecturers at Hong Kong University of Science and Technology

Educational Innovation: The Hong Kong University of Science and Technology (HKUST) is testing AI-generated virtual lecturers, including an AI version of Albert Einstein, to transform teaching methods and engage students.

Teaching Enhancement: AI lecturers aim to address teacher shortages and enhance learning experiences. While students find them approachable, some prefer human teachers for unique experiences. Read more

OpenAI’s NSFW Content Proposal

Content Policy Debate: OpenAI is considering allowing users to generate NSFW content, including erotica and explicit images, using its AI tools like ChatGPT and DALL-E. This proposal has sparked controversy.

Ethical Concerns: Critics argue it contradicts OpenAI’s mission of developing “safe and beneficial” AI. OpenAI acknowledges potential valid use cases but emphasizes responsible generation within appropriate contexts. Read more

Bumble’s Vision for AI in Dating

Future of Dating: Bumble founder Whitney Wolfe Herd envisions AI “dating concierges” streamlining the matching process by essentially going on dates to find compatible matches for users.

AI Assistance: These AI assistants could also provide dating coaching and advice. Despite concerns about AI companions forming unhealthy bonds, Bumble’s focus remains on fostering healthy relationships. Read more

Final Thoughts

This week’s updates showcase AI’s transformative power in areas like education, healthcare, and digital content creation. However, they also raise critical questions about ethics, job displacement, and intellectual property. As we look to the future, it’s essential to balance innovation with responsibility, ensuring AI advancements benefit society as a whole. Thanks for joining us, and stay tuned for more insights and updates in next week’s edition of “Last Week in AI.”


Explore the latest AI advancements and industry impacts, featuring new technologies from Meta, NVIDIA, Groq and more.

Last Week in AI: Episode 28

Welcome to another edition of Last Week in AI, where we dive into the latest advancements and partnerships shaping the future of technology. This week, Meta unveiled its new AI model, Llama 3, which brings enhanced capabilities to developers and businesses. With NVIDIA supporting broader accessibility and Groq offering faster, more cost-effective hosted versions, Llama 3 is set to make a significant impact across platforms. We also cover Google's AI investment plans, Stable Diffusion 3, VASA-1, and much more. Let's dive in!

Meta Releases Llama 3

Meta has released Llama 3 with enhanced capabilities and performance across diverse benchmarks.

Key Takeaways:

  • Enhanced Performance: Llama 3 offers 8B and 70B parameter models, showcasing top-tier results with advanced reasoning abilities.
  • Extensive Training Data: The models were trained on 15 trillion tokens, including a significant increase in code and non-English data.
  • Efficient Training Techniques: Utilizing 24,000 GPUs, Meta employed scaling strategies like data, model, and pipeline parallelization for effective training.
  • Improved Alignment and Safety: Supervised fine-tuning techniques and policy optimization were used to enhance the models’ alignment with ethical guidelines and safety.
  • New Safety Tools: Meta introduces tools like Llama Guard 2 and CyberSecEval 2 to aid developers in responsible deployment.
  • Broad Availability: Llama 3 will be accessible on major cloud platforms and integrated into Meta’s AI assistant, expanding its usability.

Why It Matters

With Llama 3, Meta is pushing the boundaries of language model capabilities, offering accessible AI tools that promise to transform how developers and businesses leverage AI technology.
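
For developers who want to experiment directly, the weights are also distributed through Hugging Face. The following is a minimal sketch, not an official recipe, of running the 8B Instruct variant with the transformers library; it assumes access to the gated meta-llama/Meta-Llama-3-8B-Instruct repository (granted after accepting Meta's license) and a GPU with roughly 16 GB of memory in bfloat16.

```python
# Minimal sketch: running Llama 3 8B Instruct with Hugging Face transformers.
# Assumes access to the gated "meta-llama/Meta-Llama-3-8B-Instruct" repo and
# that the `accelerate` package is installed (needed for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama 3 instruct models use a chat template; apply_chat_template builds the
# prompt in the format the model was fine-tuned on.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize what Llama 3 is in one sentence."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```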


NVIDIA Boosts Meta’s Llama 3 AI Model Performance Across Platforms

NVIDIA is playing a pivotal role in enhancing the performance and accessibility of Meta’s Llama 3 across various computing environments.

Key Takeaways:

  • Extensive GPU Utilization: Meta’s Llama 3 was initially trained using 24,576 NVIDIA H100 Tensor Core GPUs. Meta plans to expand to 350,000 GPUs.
  • Versatile Availability: Accelerated versions of Llama 3 are now accessible on multiple platforms.
  • Commitment to Open AI Development: NVIDIA continues to refine community software and open-source models, ensuring AI development remains transparent and secure.

Why It Matters

NVIDIA’s comprehensive support and advancements are crucial in scaling Llama 3’s deployment across diverse platforms, making powerful AI tools more accessible and efficient. This collaboration underscores NVIDIA’s commitment to driving innovation and transparency in the AI sector.


Groq Launches High-Speed Llama 3 Models

Groq has introduced its implementation of Meta’s Llama 3 LLM, boasting significantly enhanced performance and attractive pricing.

Key Takeaways:

  • New Releases: Groq has deployed Llama 3 8B and 70B models on its LPU™ Inference Engine.
  • Exceptional Speed: The Llama 3 70B model by Groq achieves 284 tokens per second, a 3-11x higher throughput than competing providers.
  • Cost-Effective Pricing: Groq offers Llama 3 70B at $0.59 per 1M tokens for input and $0.79 per 1M tokens for output.
  • Community Engagement: Groq encourages developers to share feedback, applications, and performance comparisons.

Why It Matters

Groq’s rapid and cost-efficient Llama 3 implementations represent a significant advancement in the accessibility and performance of large language models, potentially transforming how developers interact with AI technologies in real-time applications.
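
To make the pricing concrete, here is a minimal sketch of calling a Groq-hosted Llama 3 70B model with Groq's Python SDK and estimating the cost of a single request from the rates quoted above. The llama3-70b-8192 model identifier and the usage fields are assumptions based on the SDK's OpenAI-style interface; check Groq's documentation for current model names and rates.

```python
# Minimal sketch: calling Groq-hosted Llama 3 70B and estimating request cost.
# Assumes the `groq` package is installed and GROQ_API_KEY is set in the
# environment; the model identifier below is an assumption.
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

response = client.chat.completions.create(
    model="llama3-70b-8192",  # assumed model name; verify against Groq's docs
    messages=[{"role": "user", "content": "Explain LPUs in two sentences."}],
)
print(response.choices[0].message.content)

# Back-of-the-envelope cost using the article's quoted rates:
# $0.59 per 1M input tokens, $0.79 per 1M output tokens.
usage = response.usage
cost = (usage.prompt_tokens * 0.59 + usage.completion_tokens * 0.79) / 1_000_000
print(f"~${cost:.6f} for this request "
      f"({usage.prompt_tokens} in / {usage.completion_tokens} out tokens)")
```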


DeepMind CEO Foresees Over $100 Billion Google Investment in AI

Demis Hassabis, CEO of DeepMind, predicts Google will invest heavily in AI, exceeding $100 billion over time.

Key Takeaways:

  • Advanced Hardware: Google is developing Axion CPUs, which it says offer up to 30% better performance than comparable Arm-based cloud instances and up to 60% better energy efficiency than comparable x86 processors from Intel and AMD.
  • DeepMind’s Focus: The investment will support DeepMind’s software development in AI.
  • Mixed Research Outcomes: Some of DeepMind’s projects, like AI-driven material discovery and weather forecasting, haven’t met expectations.
  • High Compute Needs: These AI goals require significant computational power, a key reason DeepMind has operated under Google since its acquisition in 2014.

Why It Matters

Google’s commitment to funding AI indicates its long-term strategy to lead in technology innovation. The investment in DeepMind underscores the potential of AI to drive future advancements across various sectors.


Stability AI Launches Stable Diffusion 3 with Enhanced Features

Stability AI has released Stable Diffusion 3 and its Turbo version on their Developer Platform API, marking significant advancements in text-to-image technology.

Key Takeaways:

  • Enhanced Performance: Stable Diffusion 3 surpasses competitors like DALL-E 3 and Midjourney v6, excelling in typography and prompt adherence.
  • Improved Architecture: The new Multimodal Diffusion Transformer (MMDiT) boosts text comprehension and spelling over prior versions.
  • Reliable API Service: In partnership with Fireworks AI, Stability AI ensures 99.9% service availability, targeting enterprise applications.
  • Commitment to Ethics: Stability AI focuses on safe, responsible AI development, engaging experts to prevent misuse.
  • Membership Benefits: Model weights for Stable Diffusion 3 will soon be available for self-hosting to members.

Why It Matters

The release of Stable Diffusion 3 positions Stability AI at the forefront of AI-driven image generation, offering superior performance and reliability for developers and enterprises.
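
As an illustration of how the Developer Platform API is typically called, here is a minimal sketch in Python. The v2beta endpoint path, form fields, and the "sd3" model name are assumptions; consult Stability AI's API documentation for the authoritative interface and current pricing.

```python
# Minimal sketch: generating an image with Stable Diffusion 3 via Stability AI's
# Developer Platform API. The endpoint path, form fields, and "sd3" model name
# are assumptions based on the v2beta API; check the official docs before use.
import os
import requests

API_KEY = os.environ["STABILITY_API_KEY"]

response = requests.post(
    "https://api.stability.ai/v2beta/stable-image/generate/sd3",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Accept": "image/*",  # ask for raw image bytes back
    },
    files={"none": ""},  # force multipart/form-data encoding
    data={
        "prompt": "A sign that reads 'Last Week in AI' in neon letters",
        "model": "sd3",
        "output_format": "png",
    },
    timeout=120,
)
response.raise_for_status()

with open("sd3_output.png", "wb") as f:
    f.write(response.content)
print("Saved sd3_output.png")
```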


Introducing VASA-1: Next-Gen Real-Time Talking Faces

Microsoft Research's new model, VASA-1, generates realistic talking faces from a single image and an audio clip. It features precise lip syncing, dynamic facial expressions, and natural head movements, all produced in real time.

Key Features:

  • Realism and Liveliness: Syncs lips perfectly with audio. Captures a broad range of expressions and head movements.
  • Controllability: Adjusts eye gaze, head distance, and emotions.
  • Generalization: Handles various photo and audio types, including artistic and non-English inputs.
  • Disentanglement: Separates appearance, head pose, and facial movements for detailed editing.
  • Efficiency: Generates 512×512 videos at up to 45fps offline and 40fps online with low latency.

Why It Matters

VASA-1 revolutionizes digital interactions, enabling real-time creation of lifelike avatars for immersive communication and media.


Adobe Enhances Premiere Pro with New AI-Powered Editing Features

Adobe has announced AI-driven features for Premiere Pro, aimed at simplifying video editing tasks. These updates, powered by Adobe’s AI model Firefly, are scheduled for release later this year.

Key Features:

  • Generative Extend: Uses AI to create additional video frames, helping editors achieve perfect timing and smoother transitions.
  • Object Addition & Removal: Easily add or remove objects within video frames, such as altering backgrounds or modifying an actor’s apparel.
  • Text to Video: Generate new footage directly in Premiere Pro using text prompts or reference images, ideal for storyboarding or supplementing primary footage.
  • Third-Party AI Model Integration: Adobe is exploring support in Premiere Pro for third-party AI models such as Pika and OpenAI’s Sora for tasks like extending clips and generating B-roll.
  • Content Credentials: New footage will include details about the AI used in its creation, ensuring transparency about the source and method of generation.

Why It Matters

These advancements in Premiere Pro demonstrate Adobe’s commitment to integrating AI technology to streamline video production, offering creative professionals powerful tools to improve efficiency and expand creative possibilities.


Intel Launches Hala Point, the World’s Largest Neuromorphic Computer

Intel has introduced Hala Point, the world's largest neuromorphic computer, equipped with 1.15 billion artificial neurons across 1,152 Loihi 2 chips, marking a significant milestone in brain-inspired computing.

Key Features:

  • Massive Scale: Hala Point features 1.15 billion neurons capable of executing 380 trillion synaptic operations per second.
  • Brain-like Computing: This system mimics brain functions by integrating computation and data storage within neurons.
  • Engineering Challenges: Despite its advanced hardware, adapting real-world applications to neuromorphic formats and training models pose substantial challenges.
  • Potential for AGI: Experts believe neuromorphic computing could advance efforts towards artificial general intelligence, though challenges in continuous learning persist.

Why It Matters

Hala Point’s development offers potential new solutions for complex computational problems and moving closer to the functionality of the human brain in silicon form. This may lead to more efficient AI systems capable of learning and adapting in ways that are more akin to human cognition.


AI-Controlled Fighter Jet Successfully Tests Against Human Pilot

The US Air Force, in collaboration with DARPA’s Air Combat Evolution (ACE) program, has conducted a successful test of an AI-controlled fighter jet in a dogfight scenario against a human pilot.

Key Points:

  • Test Details: The AI piloted an X-62A experimental aircraft against a human-operated F-16 at Edwards Air Force Base in September 2023.
  • Maneuverability: The AI demonstrated advanced flying capabilities, executing close-range, high-speed maneuvers with the human pilot.
  • Ongoing Testing: This test is part of a series, with DARPA planning to continue through 2024, totaling 21 flights to date.
  • Military Applications: The test underscores significant progress in AI for potential use in military aircraft and autonomous defense systems.

Why It Matters

This development highlights the growing role of AI in enhancing combat and defense capabilities, potentially leading to more autonomous operations and strategic advantages in military aerospace technology.


AI Continues to Outperform Humans Across Multiple Benchmarks

Recent findings indicate that AI has significantly outperformed humans in various benchmarks such as image classification and natural language inference, with AI models like GPT-4 showing remarkable proficiency even in complex cognitive tasks.

Key Points:

  • AI Performance: AI has now surpassed human capabilities in many traditional performance benchmarks, rendering some measures obsolete due to AI’s advanced skills.
  • Complex Tasks: While AI still faces challenges with tasks like advanced math, progress is notable—GPT-4 solved 84.3% of difficult math problems in a test set.
  • Accuracy Issues: Despite advancements, AI models are still susceptible to generating incorrect or misleading information, known as “hallucinations.”
  • Improvements in Truthfulness: GPT-4 has shown significant improvements in generating accurate information, scoring 0.59 on the TruthfulQA benchmark, a substantial increase over earlier models.
  • Advances in Visual AI: Text-to-image AI has made strides in creating high-quality, realistic images faster than human artists.
  • Future Prospects: Expectations for 2024 include the potential release of even more sophisticated AI models like GPT-5, which could revolutionize various industries.

Why It Matters

These developments highlight the rapid pace of AI innovation, which is not only enhancing its problem-solving capabilities but also reshaping industry standards and expectations for technology’s role in society.


Final Thoughts

As these tools become more sophisticated and widely available, they are poised to revolutionize industries by making complex tasks simpler and more efficient. This ongoing evolution in AI technology promises to change how we approach and solve real-world problems.
