Overview of recent AI industry news including OpenAI staff departures, Sony Music Group's copyright warnings, Scarlett Johansson's voice usage issue, and new developments in ChatGPT search integration.

Last Week in AI: Episode 33

1. Significant Industry Moves

OpenAI Staff Departures and Safety Concerns

Several key staff members responsible for safety at OpenAI have recently left the company. This wave of departures raises questions about the internal dynamics and commitment to AI safety protocols within the organization. The departures could impact OpenAI’s ability to maintain and enforce robust safety measures as it continues to develop advanced AI technologies.

For more details, you can read the full article on Gizmodo.

Sony Music Group’s Warning to AI Companies

Sony Music Group has issued warnings to approximately 700 companies for using its content to train AI models without permission. This move highlights the growing tension between content creators and AI developers over intellectual property rights and the use of copyrighted materials in AI training datasets.

For more details, you can read the full article on NBC News.

Scarlett Johansson’s Voice Usage by OpenAI

Scarlett Johansson revealed that OpenAI had approached her about lending her voice to its AI models; she declined, and after the company later demoed a ChatGPT voice that many listeners felt sounded strikingly like hers, she publicly objected. The incident underscores the ethical and legal considerations surrounding the use of celebrity likenesses in AI applications, and Johansson’s stance against the unauthorized use of her voice reflects broader concerns about consent and compensation in the era of AI-generated content.

For more details, you can read the full article on TechCrunch.

ChatGPT’s New Search Product

OpenAI is reportedly working on a stealth search product that would integrate web search directly into ChatGPT. The new product aims to enhance the search experience with more intuitive, conversational interactions, and its development suggests a significant shift in how AI could transform search in the near future.

For more details, you can read the full article on Search Engine Land.

2. Ethical Considerations and Policy

Actors’ Class-Action Lawsuit Over Voice Theft

A group of actors has filed a class-action lawsuit against an AI startup, alleging unauthorized use of their voices to train AI models. This lawsuit highlights the ongoing legal battles over voice and likeness rights in the AI industry. The outcome of this case could set a precedent for how AI companies use personal data and celebrity likenesses in their products.

For more details, you can read the full article on The Hollywood Reporter.

Inflection AI’s Vision for the Future

Inflection AI is positioning itself to redefine the future of artificial intelligence. The company aims to create AI systems that are more aligned with human values and ethical considerations. Their approach focuses on transparency, safety, and ensuring that AI benefits all of humanity, reflecting a commitment to responsible AI development.

For more details, you can read the full article on Inflection AI.

Meta’s Introduction of Chameleon

Meta has introduced Chameleon, a state-of-the-art multimodal AI model capable of processing and understanding multiple types of data simultaneously. This new model is designed to improve the integration of various data forms, enhancing the capabilities of AI applications in fields such as computer vision, natural language processing, and beyond.

For more details, you can read the full article on VentureBeat.

Humane’s Potential Acquisition

Humane, a startup known for its AI-driven wearable device, is rumored to be seeking a buyer. The company’s AI Pin product has garnered attention for its innovative approach to personal AI assistants. The potential acquisition indicates a growing interest in integrating advanced AI into consumer technology.

For more details, you can read the full article on The Verge.

Adobe’s Firefly AI in Lightroom

Adobe has integrated its Firefly AI-powered generative removal tool into Lightroom. This new feature allows users to seamlessly remove unwanted elements from photos using AI, significantly enhancing the photo editing process. The tool demonstrates the practical applications of AI in creative software and the ongoing evolution of digital content creation.

For more details, you can read the full article on TechCrunch.

Amazon’s AI Overhaul for Alexa

Amazon plans to give Alexa an AI overhaul, introducing a monthly subscription service for advanced features. This update aims to enhance Alexa’s capabilities, making it more responsive and intuitive. The shift to a subscription model reflects Amazon’s strategy to monetize AI advancements and offer premium services to users.

For more details, you can read the full article on CNBC.

3. AI in Practice

Microsoft’s Recall AI Feature Under Investigation

Microsoft’s new Recall feature, which continuously captures screenshots of a user’s activity on Copilot+ PCs so it can later be searched with AI, is being examined by the UK’s Information Commissioner’s Office. The inquiry will assess whether the feature meets privacy and regulatory standards. This case highlights the importance of regulatory oversight in the deployment of AI technologies.

For more details, you can read the full article on Mashable.

Near AI Chatbot and Smart Contracts

Near AI has developed a chatbot capable of writing and deploying smart contracts. This innovative application demonstrates the potential of AI in automating complex tasks in the blockchain ecosystem. The chatbot aims to make smart contract development more accessible and efficient for users.

For more details, you can read the full article on Cointelegraph.

Google Search AI Overviews

Google is rolling out AI-generated overviews for search results, designed to provide users with concise summaries of information. This feature leverages Google’s advanced AI to enhance the search experience, offering quick and accurate insights on various topics.

For more details, you can read the full article on Business Insider.

Meta’s AI Advisory Board

Meta has established an AI advisory board to guide its development and deployment of AI technologies. The board includes experts in AI ethics, policy, and technology, aiming to ensure that Meta’s AI initiatives are aligned with ethical standards and societal needs.

For more details, you can read the full article on Meta’s Investor Relations.

Stay tuned for more updates next week as we continue to cover the latest developments in AI.



Last Week in AI

We’re seeing some fascinating developments in AI lately, from new apps and healthcare tools to major shifts in regulation and cybersecurity. Let’s dive into these updates.


OpenAI App Store Launch

OpenAI is about to shake things up by launching a store for GPTs, custom apps built on their AI models like GPT-4. Here’s what’s happening:

  1. GPT Store Launch: This new platform, announced at OpenAI’s DevDay, is set to open soon. It’s a place where developers can list their GPT-based apps.
  2. Rules for Developers: If you’re making a GPT app, you’ve got to follow OpenAI’s latest usage policies and brand guidelines to get your app on the store.
  3. Diverse Applications: These GPTs can do all sorts of things, from specialized Q&As to generating code that follows best practices.

What’s the big deal? Well, OpenAI is moving from just offering AI models to creating a whole ecosystem where others can build and share their AI-powered apps. This could really democratize how generative AI apps are made, though we’re still waiting to see the full impact of this move.


Google DeepMind’s Fresh Approach to Training Robots

Google’s DeepMind team is pushing the boundaries in robotics for 2024. They’re working on cool new ways to train robots using videos and big language models. Here’s the lowdown:

  1. Smarter Robots: The goal is to make robots that get what humans want and can adapt better. They’re moving away from robots that just do one thing over and over.
  2. AutoRT System: This new system uses big AI models to control a bunch of robots at once. These robots can work together and handle different tasks by understanding visual and language cues.
  3. RT-Trajectory for Learning: They’ve also got this new method that uses video to teach robots. It’s turning out to be more successful than older ways of training.

Basically, DeepMind is working on making robots more versatile and quick learners. It’s a big step from the robots we’re used to, and it could really change how we think about and use robots in the future.


Microsoft Copilot

Microsoft has been pretty sneaky, launching its Copilot app on Android, iOS, and iPadOS during the holidays. It’s like a portable AI buddy, based on the same tech as OpenAI’s ChatGPT. Here’s the lowdown:

  1. AI-Powered Assistant: Copilot (you might know it as Bing Chat) can help with all sorts of tasks. Drafting emails, summarizing texts, planning trips, and more – just by typing in your questions or instructions.
  2. Creative Boost with DALL·E 3: The app’s got this cool Image Creator feature powered by DALL·E 3. It lets you experiment with different styles, whip up social media posts, design logos, and even visualize storyboards for films and videos.
  3. Popular and Free Access to Advanced AI: It’s a hit, with over 1.5 million downloads across Android and iOS. What’s really neat is that it uses OpenAI’s more advanced GPT-4 tech for free – unlike OpenAI’s own ChatGPT app, which charges for GPT-4 access.

Microsoft’s move to make Copilot a standalone app, especially after rebranding Bing Chat, shows they’re serious about making AI more accessible and widespread. It’s a big step in bringing advanced AI right into our daily digital lives.


Perplexity AI

Perplexity AI is a new player in the search engine game, but with an AI twist. It’s like a chatbot that lets users ask questions in everyday language and gives back answers with sources. Here’s the scoop:

  1. Chatbot-Style Search: You ask questions, and it replies with summaries and citations, kind of like chatting with a super-smart friend. And you can dig deeper with follow-up questions.
  2. Pro Plan Perks: For those who want more, there’s a Pro plan. It has cool features like image generation, a Copilot for unlimited help, and even lets you upload files for the AI to analyze.
  3. Ambitious AI Goals: Perplexity isn’t stopping at search. They’re rolling out their own GenAI models that use their search data and the web for better performance. This is available to Pro users through an API.
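
To make that last point concrete, here’s a rough sketch of what calling an API like that could look like. Perplexity’s API has generally followed the OpenAI chat-completions format, so the snippet below reuses the standard OpenAI Python client; treat the base URL, model name, and environment variable as assumptions to verify against Perplexity’s docs, not details confirmed by this post:

```python
# Hedged sketch of querying Perplexity's Pro API via the OpenAI-compatible client.
# The base URL, model name, and env var are assumptions; check Perplexity's docs.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],  # assumed location of a Pro API key
    base_url="https://api.perplexity.ai",      # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="pplx-70b-online",  # assumed name of a web-connected ("online") model
    messages=[
        {"role": "system", "content": "Answer concisely and cite your sources."},
        {"role": "user", "content": "What changed in AI regulation this week?"},
    ],
)

print(response.choices[0].message.content)
```

The appeal of this pattern is that existing OpenAI-style tooling works unchanged; only the endpoint and model name differ.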

But, with great AI comes great responsibility. There are worries about misuse and misinformation, plus the costs and copyright issues since GenAI models learn from heaps of web content. Despite these challenges, Perplexity has raised a lot of money and boasts 10 million active users each month. It’s definitely a name to watch in the AI search world!


AI Regulations

In 2024, there’s more action on AI rules globally. Last year saw big steps in setting these up. Now, countries like the U.S., the European Union, and China are each crafting their own AI laws, and other regions are joining in with their approaches to AI and its effects.

Three key takeaways:

  1. The US, EU, and China each have their unique strategies for AI regulations, reflecting their influence in the AI sector.
  2. These upcoming regulations will significantly impact companies, especially those in AI.
  3. It’s not just about tech; these rules are shaping international politics and relationships.

In short, AI regulation is evolving rapidly, making a notable impact on businesses and global politics. It’s a crucial area to watch for anyone interested in the future of AI and its governance.


AI Cybersecurity

AI trends are really shaping up, especially in cybersecurity. Last year, generative AI was a big deal, and it’s going to have an even bigger impact this year. Here’s what’s going on:

Key points:

  1. AI’s use, misuse, and importance in cybersecurity are hot topics. Think of things like AI-driven cyberattacks and data breaches.
  2. Experts are talking about both the challenges and opportunities AI brings, like its role in detecting threats or creating malware.
  3. There’s a big focus on how AI might be misused for things like deepfakes and spreading false info.

In essence, AI is really changing the game in cybersecurity, with lots of potential for good and bad. It’s crucial for organizations to stay alert and understand how to handle these AI tools.


Data Ownership

The big thing in tech right now is all about who owns and controls data. We’re moving from a world where personal data was used freely to one where privacy and even data ownership rights are taking center stage. Think of it like data becoming the new “oil” for AI.

Here’s what’s happening:

  1. Laws like the GDPR kicked off this trend. Now, places like Brazil are also getting serious about data privacy and rolling out regulations of their own.
  2. This change is cutting down on the free-for-all use of personal data. Instead, we’re seeing new systems that give people more control over their data.
  3. Big names like Apple’s CEO, Tim Cook, are pushing for these changes, focusing on protecting and empowering consumers.

So, what’s the bottom line? Data ownership is becoming a huge deal in tech. It’s not just about privacy anymore; it’s about giving people a say in how their data is used, which is a game-changer for everyone in the data economy.


Investing in AI

In 2024, AI investing looks like it’s moving beyond just hype. Investors are keen on funding AI startups and are expecting this trend to keep up. But now, there’s a shift towards more sustainable, focused businesses in AI.

Here’s the scoop:

  1. We’re anticipating a new wave of AI startups. These aren’t just building on tech from giants like OpenAI or Google, but are more specialized and sector-specific.
  2. Investors like Lisa Wu from Norwest Venture Partners see big potential in these specialized AI businesses. They’re seen as safer bets because they’re not easy for big companies to just replicate.
  3. These startups are all about knowing their specific users and using AI to boost productivity. For example, law firms are using AI to work more efficiently and get better results at lower costs.

In short, AI investing is maturing. It’s less about general hype and more about creating targeted solutions that really understand and improve specific industries.


AI in Healthcare

Nabla, a Paris-based startup, is making big moves in healthcare with its AI doctor’s assistant. They’ve just bagged $24 million in Series B funding, and here’s why they’re a game-changer:

  1. Revolutionizing Medical Documentation: Nabla’s AI helps doctors by transcribing conversations, highlighting important info, and quickly drafting medical reports (a generic sketch of that kind of pipeline follows this list). It’s all about boosting doctors’ efficiency, not replacing them.
  2. Privacy and Accuracy: They put privacy first: no audio or notes are stored without clear consent. They’re also keen on accuracy, letting doctors share notes so transcription errors can be corrected.
  3. Impact and Future Goals: This AI tool is already helping thousands of doctors in the U.S., especially with the Permanente Medical Group. Nabla aims for FDA approval and wants to keep doctors at the heart of healthcare.
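
As a purely illustrative aside (this is not Nabla’s code, and the models named are stand-ins), the transcribe-then-draft workflow from point 1 could be wired together with off-the-shelf APIs roughly like this:

```python
# Illustrative sketch only: a generic "transcribe, then draft a note" pipeline,
# not Nabla's actual system. Model names are stand-ins for whatever a real
# product would use, and no error handling or clinical validation is shown.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def document_visit(audio_path: str, patient_consented: bool) -> str:
    # Mirror the privacy point above: never process a recording without consent.
    if not patient_consented:
        raise PermissionError("No consent given; refusing to process the recording.")

    # 1. Transcribe the doctor-patient conversation.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file
        ).text

    # 2. Condense the transcript into a structured draft note for the doctor to review.
    completion = client.chat.completions.create(
        model="gpt-4",  # stand-in; a real assistant would use a vetted, tuned model
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize this medical consultation as a structured draft note "
                    "(chief complaint, history, assessment, plan). Flag anything "
                    "uncertain so the clinician can verify it."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return completion.choices[0].message.content
```

The important design choice, which the post stresses, is that the output is a draft for the clinician to review and correct, not an autonomous medical record.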

In short, Nabla’s AI is here to assist doctors, not take over their jobs. With this new funding, they’re set to transform how doctors use technology, all while maintaining strict privacy standards. It’s an exciting step forward for AI in healthcare. 🚀💡🏥


Final Thoughts

In the AI world, big things are happening! OpenAI’s new store, Google’s smart robots, Microsoft’s Copilot app, and Perplexity AI’s search engine are shaking things up. Plus, AI’s role in healthcare, data ownership, and global regulations are evolving fast. It’s a thrilling time for AI, with major changes and innovations all around! 🌐💡🤖
