AI and Legislation

Overview of recent AI industry news including OpenAI staff departures, Sony Music Group's copyright warnings, Scarlett Johansson's voice usage issue, and new developments in ChatGPT search integration.

Last Week in AI: Episode 33

1. Significant Industry Moves

OpenAI Staff Departures and Safety Concerns

Several key staff members responsible for safety at OpenAI have recently left the company. This wave of departures raises questions about internal dynamics and the organization's commitment to AI safety protocols. The departures could affect OpenAI's ability to maintain and enforce robust safety measures as it continues to develop advanced AI technologies.

For more details, you can read the full article on Gizmodo.

Sony Music Group’s Warning to AI Companies

Sony Music Group has issued warnings to approximately 700 companies for using its content to train AI models without permission. This move highlights the growing tension between content creators and AI developers over intellectual property rights and the use of copyrighted materials in AI training datasets.

For more details, you can read the full article on NBC News.

Scarlett Johansson’s Voice Usage by OpenAI

Scarlett Johansson revealed that OpenAI approached her to use her voice for their AI models. This incident underscores the ethical and legal considerations surrounding the use of celebrity likenesses in AI applications. Johansson’s stance against the unauthorized use of her voice reflects broader concerns about consent and compensation in the era of AI-generated content.

For more details, you can read the full article on TechCrunch.

ChatGPT’s New Search Product

OpenAI is reportedly working on a stealth search product that could integrate ChatGPT capabilities directly into search engines. This new product aims to enhance the search experience by providing more intuitive and conversational interactions. The development suggests a significant shift in how AI could transform search functionalities in the near future.

For more details, you can read the full article on Search Engine Land.

2. Ethical Considerations and Policy

Actors’ Class-Action Lawsuit Over Voice Theft

A group of actors has filed a class-action lawsuit against an AI startup, alleging unauthorized use of their voices to train AI models. This lawsuit highlights the ongoing legal battles over voice and likeness rights in the AI industry. The outcome of this case could set a precedent for how AI companies use personal data and celebrity likenesses in their products.

For more details, you can read the full article on The Hollywood Reporter.

Inflection AI’s Vision for the Future

Inflection AI is positioning itself to redefine the future of artificial intelligence. The company aims to create AI systems that are more aligned with human values and ethical considerations. Their approach focuses on transparency, safety, and ensuring that AI benefits all of humanity, reflecting a commitment to responsible AI development.

For more details, you can read the full article on Inflection AI.

Meta’s Introduction of Chameleon

Meta has introduced Chameleon, a state-of-the-art multimodal AI model capable of processing and understanding multiple types of data simultaneously. This new model is designed to improve the integration of various data forms, enhancing the capabilities of AI applications in fields such as computer vision, natural language processing, and beyond.

For more details, you can read the full article on VentureBeat.

Humane’s Potential Acquisition

Humane, a startup known for its AI-driven wearable device, is rumored to be seeking acquisition. The company’s AI Pin product has garnered attention for its innovative approach to personal AI assistants. The potential acquisition indicates a growing interest in integrating advanced AI into consumer technology.

For more details, you can read the full article on The Verge.

Adobe’s Firefly AI in Lightroom

Adobe has integrated its Firefly AI-powered generative removal tool into Lightroom. This new feature allows users to seamlessly remove unwanted elements from photos using AI, significantly enhancing the photo editing process. The tool demonstrates the practical applications of AI in creative software and the ongoing evolution of digital content creation.

For more details, you can read the full article on TechCrunch.

Amazon’s AI Overhaul for Alexa

Amazon plans to give Alexa an AI overhaul, introducing a monthly subscription service for advanced features. This update aims to enhance Alexa’s capabilities, making it more responsive and intuitive. The shift to a subscription model reflects Amazon’s strategy to monetize AI advancements and offer premium services to users.

For more details, you can read the full article on CNBC.

3. AI in Practice

Microsoft’s Recall Feature Under Investigation

Microsoft’s Recall, an AI feature for Windows that continuously captures screenshots of user activity, is under investigation in the UK, where regulators are making enquiries into its privacy implications. The case highlights the importance of regulatory oversight in the deployment of AI technologies.

For more details, you can read the full article on Mashable.

Near AI Chatbot and Smart Contracts

Near AI has developed a chatbot capable of writing and deploying smart contracts. This innovative application demonstrates the potential of AI in automating complex tasks in the blockchain ecosystem. The chatbot aims to make smart contract development more accessible and efficient for users.

For more details, you can read the full article on Cointelegraph.

Google Search AI Overviews

Google is rolling out AI-generated overviews for search results, designed to provide users with concise summaries of information. This feature leverages Google’s advanced AI to enhance the search experience, offering quick and accurate insights on various topics.

For more details, you can read the full article on Business Insider.

Meta’s AI Advisory Board

Meta has established an AI advisory board to guide its development and deployment of AI technologies. The board includes experts in AI ethics, policy, and technology, aiming to ensure that Meta’s AI initiatives are aligned with ethical standards and societal needs.

For more details, you can read the full article on Meta’s Investor Relations.

Stay tuned for more updates next week as we continue to cover the latest developments in AI.


"Last Week in AI" including OpenAI, Stack Overflow, Apple's new Photos app, YouTube Premium, Microsoft MAI-1, Eli Lilly, Audible, Apple's M4 chip, Google's Pixel 8a, machine learning in whale communication, and more.

Last Week in AI: Episode 31

Hey everyone, welcome to this week’s edition of “Last Week in AI.” This week’s stories provide a glimpse into how AI is reshaping industries and our daily lives. Let’s dive in and explore these fascinating developments together.

OpenAI and Stack Overflow Partnership

Partnership Announcement: OpenAI and Stack Overflow have formed a new API partnership to leverage their collective strengths—Stack Overflow’s technical knowledge platform and OpenAI’s language models.

Impact and Controversy: This partnership aims to empower developers by combining high-quality technical content with advanced AI models. However, some Stack Overflow users have protested, arguing it exploits their contributed labor without consent, leading to bans and post reverts by staff. This raises questions about content creator attribution and future model training, despite the potential for improved AI models. Read more

Apple’s New Photos App Feature

Feature Introduction: Apple is set to introduce a “Clean Up” feature in its Photos app update, leveraging generative AI for advanced image editing. This tool will allow users to remove objects from photos using a brush tool, similar to Adobe Photoshop’s Content-Aware Fill.

Preview and Positioning: Currently in testing on macOS 15, the feature may be previewed at Apple’s “Let Loose” iPad event in May 2024. This positions the new iPads as AI-equipped devices, showcasing practical AI applications beyond chatbots and entertainment. Read more

YouTube Premium’s AI “Jump Ahead” Feature

Feature Testing: YouTube Premium subscribers can now test an AI-powered “Jump ahead” feature, allowing them to skip commonly skipped video sections. By double-tapping to skip, users can jump to the point where most viewers typically resume watching.

Availability and Aim: This feature is currently available on the YouTube Android app in the US for English videos and requires a Premium subscription. It complements YouTube’s “Ask” feature and aims to enhance the viewing experience by leveraging AI and user data. Read more

Microsoft’s MAI-1 Language Model Development

Model Development: Microsoft is developing a new large-scale AI language model, MAI-1, led by Mustafa Suleyman, the former CEO of Inflection AI. MAI-1 will have approximately 500 billion parameters, significantly larger than Microsoft’s previous models.

Strategic Significance: This development signifies Microsoft’s dual approach to AI, focusing on both small and large models. Despite its investment in OpenAI, Microsoft is independently advancing its AI capabilities, with plans to unveil MAI-1 at their Build conference. Read more

AI in Drug Discovery at Eli Lilly

Innovative Discovery: The pharmaceutical industry is integrating AI into drug discovery, with Eli Lilly scientists noting novel molecular designs generated by AI, setting a precedent for AI-driven breakthroughs in biology.

Industry Impact: AI is expected to propose new drugs and generate designs beyond human capability. This integration promises faster development times, higher success rates, and exploration of new targets, reshaping drug discovery. Read more

AI-Narrated Audiobooks on Audible

Audiobook Trends: Over 40,000 AI-voiced titles have been added to Audible since Amazon launched a tool for self-published authors to generate AI narrations. This makes audiobook creation more accessible but has sparked controversy.

Industry Reaction: Some listeners dislike the lack of filters to exclude AI narrations, and human narrators fear job losses. Major publishers are embracing AI for cost savings, highlighting tensions between creative integrity and commercial incentives. Read more

Apple’s M4 Chip for iPad Pro

Processor Introduction: Apple’s M4 chip, the latest and most powerful processor for the new iPad Pro, offers groundbreaking performance and efficiency.

Key Innovations: The M4 chip features a 10-core CPU, 10-core GPU, advanced AI capabilities, and power efficiency gains. These innovations enable superior graphics, real-time AI features, and all-day battery life. Read more

Google’s Pixel 8a Smartphone

Affordable Innovation: The Pixel 8a, Google’s latest affordable smartphone, is priced at $499 and packed with AI-powered features and impressive camera capabilities.

Key Highlights: The Pixel 8a features a refined design, dual rear camera, AI tools, and enhanced security. It also offers family-friendly features and 7 years of software support. Read more

OpenAI’s Media Manager Tool

Tool Development: OpenAI is building a Media Manager tool to help creators manage how their works are included in AI training data. This system aims to identify copyrighted material across sources.

AI Training Approach: OpenAI uses diverse public datasets and proprietary data to train its models, collaborating with creators, publishers, and regulators to support healthy ecosystems and respect intellectual property. Read more

Machine Learning in Sperm Whale Communication

Breakthrough Discovery: MIT CSAIL and Project CETI researchers have discovered a combinatorial coding system in sperm whale vocalizations, akin to a phonetic alphabet, using machine learning techniques.

Communication Insights: By analyzing a large dataset of whale codas, researchers identified patterns and structures, suggesting a complex communication system previously thought unique to humans. This finding opens new avenues for studying cetacean communication. Read more
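As a toy illustration (synthetic data and deliberately simplified features, not the researchers' actual pipeline) of what a combinatorial "phonetic alphabet" means here: if each coda is reduced to a discrete rhythm type and a tempo type, a handful of feature values can combine into a much larger set of distinct coda types.

```python
from collections import Counter

# Toy illustration of combinatorial coding in sperm whale codas.
# Synthetic data: each coda is reduced to two discrete features, a
# rhythm class and a tempo class (simplified stand-ins for the
# features the MIT CSAIL / Project CETI study actually extracted).
codas = [
    ("1+1+3", "fast"), ("1+1+3", "slow"), ("5R", "fast"),
    ("5R", "slow"), ("1+1+3", "fast"), ("4R", "medium"),
]

rhythms = {rhythm for rhythm, _ in codas}
tempos = {tempo for _, tempo in codas}
combinations = Counter(codas)  # how often each (rhythm, tempo) pair occurs

print(f"{len(rhythms)} rhythm types x {len(tempos)} tempo types")
print(f"{len(combinations)} distinct coda types observed")
```

The point of the sketch: a small inventory of building blocks (three rhythms, three tempos) already yields more observed combinations than either feature alone, which is the hallmark of a combinatorial code.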

Sam Altman’s Concerns About AI’s Economic Impact

CEO’s Warning: Sam Altman, CEO of OpenAI, has expressed significant concerns about AI’s potential impact on the labor market and economy, particularly job disruptions and economic changes.

Economic Threat: Studies suggest AI could affect up to 60% of jobs in advanced economies, leading to job losses and lower wages for some workers. Altman emphasizes the need to address these concerns proactively. Read more

AI Lecturers at Hong Kong University

Educational Innovation: The Hong Kong University of Science and Technology (HKUST) is testing AI-generated virtual lecturers, including an AI version of Albert Einstein, to transform teaching methods and engage students.

Teaching Enhancement: AI lecturers aim to address teacher shortages and enhance learning experiences. While students find them approachable, some prefer human teachers for unique experiences. Read more

OpenAI’s NSFW Content Proposal

Content Policy Debate: OpenAI is considering allowing users to generate NSFW content, including erotica and explicit images, using its AI tools like ChatGPT and DALL-E. This proposal has sparked controversy.

Ethical Concerns: Critics argue it contradicts OpenAI’s mission of developing “safe and beneficial” AI. OpenAI acknowledges potential valid use cases but emphasizes responsible generation within appropriate contexts. Read more

Bumble’s Vision for AI in Dating

Future of Dating: Bumble founder Whitney Wolfe Herd envisions AI “dating concierges” streamlining the matching process by essentially going on dates to find compatible matches for users.

AI Assistance: These AI assistants could also provide dating coaching and advice. Despite concerns about AI companions forming unhealthy bonds, Bumble’s focus remains on fostering healthy relationships. Read more

Final Thoughts

This week’s updates showcase AI’s transformative power in areas like education, healthcare, and digital content creation. However, they also raise critical questions about ethics, job displacement, and intellectual property. As we look to the future, it’s essential to balance innovation with responsibility, ensuring AI advancements benefit society as a whole. Thanks for joining us, and stay tuned for more insights and updates in next week’s edition of “Last Week in AI.”



Last Week in AI: Episode 23

On this week’s edition of “Last Week in AI,” we’ll explore the latest developments from the world of AI. From major announcements and groundbreaking innovations to debates on ethics and policy. We’re covering the essential stories shaping the future of AI.


xAI’s Grok Now Open Source

Elon Musk has made xAI’s Grok-1 AI chatbot open source, available on GitHub. This initiative invites the global community to contribute to and enhance Grok-1, positioning it as a competitor to OpenAI’s models.

Key Takeaways:

  • Open-Source Release: Grok-1’s technical foundation, including its model weights and architecture, is now accessible to all, marking a significant move towards collaborative AI development.
  • Musk’s Vision for AI: Following his acquisition of Twitter, Musk has advocated for transparency in AI, challenging the norm of proprietary models. His legal battle with OpenAI underscores his commitment to open-source principles.
  • Community Collaboration: By open-sourcing Grok-1, xAI taps into the collective intelligence of the global tech community, accelerating the model’s evolution and refinement.
  • Initial Impressions: At launch, Grok-1 required a subscription and did not differentiate itself significantly from other chatbots. The open-source strategy, however, may enhance its capabilities through widespread community input.

Why It Matters

Musk’s decision to open-source Grok-1 reflects a strategic move towards fostering innovation through openness and collaboration. This approach emphasizes the potential of community-driven progress in enhancing AI technologies. As Grok-1 evolves, it could emerge as a significant player in the AI chatbot arena.


ChatGPT-5: What We Know So Far

OpenAI’s upcoming ChatGPT-5 aims to bring us closer to achieving artificial general intelligence (AGI). With improvements in understanding and creating human-like text, this model promises to make conversations with AI indistinguishable from those with humans.

Key Takeaways:

  • Enhanced Comprehension and Production: ChatGPT-5 is expected to offer more nuanced understanding and generation of text, elevating the user experience to one that feels more like interacting with another human.
  • Superior Reasoning and Reliability: Expect better reasoning abilities and more dependable responses from the new model.
  • Personalization and Multi-Modal Learning: Users can tailor ChatGPT-5 to their needs. It will incorporate learning from diverse data types, including images, audio, and video.
  • Anticipated Launch and Subscription Model: Slated for release in 2025, ChatGPT-5’s access might be bundled with ChatGPT Plus or Copilot Pro subscriptions.

Why It Matters

GPT-5 may make GPT-4 more accessible and affordable. This leap forward in AI capabilities holds the potential to revolutionize various sectors, making advanced AI tools more integral to our daily lives and work.


Perplexity AI Ready to Take on Google Search

Perplexity, an AI search engine, is making waves in the tech world. Backed by big names like Nvidia’s Jensen Huang, Shopify’s Tobi Lütke, and Mark Zuckerberg, this startup is quickly becoming a heavyweight in consumer AI.

Key Takeaways:

  • Impressive Backing and Growth: With over $74 million raised and a valuation surpassing $500 million, Perplexity’s rapid ascent is noteworthy. CEO Aravind Srinivas leads the charge.
  • Growing User Base: The platform boasts more than 1 million daily active users, highlighting its growing appeal.
  • Competing with Google: In certain search situations, especially those requiring definitive answers, Perplexity has shown it can outdo Google. Yet, it hasn’t fully convinced all users to switch.
  • Algorithm Details Under Wraps: Perplexity has not revealed the inner workings of its algorithm, leaving its specific advantages and features a bit of a mystery.

Why It Matters

Perplexity’s ability to attract notable tech leaders and a substantial user base points to its potential. While it’s still early days, and not everyone’s ready to jump ship from Google, Perplexity’s progress suggests it’s a company to watch in the evolving landscape of search technology.


India Scraps AI Launch Approval Plan to Boost Innovation

The Indian government has abandoned its proposal to mandate approval for AI model launches. Instead, it aims to encourage the growth of AI technologies without imposing regulatory hurdles.

Key Takeaways:

  • Revised Regulatory Approach: Initially proposed regulations requiring pre-launch approval for AI models have been withdrawn to avoid stifling innovation.
  • Stakeholder Feedback: The decision came after widespread criticism from industry experts and researchers, highlighting concerns over innovation and growth in the AI sector.
  • Alternative Strategies: The government will instead promote responsible AI development through dedicated programs, guidelines, and best practices.

Why It Matters

By dropping the approval requirement, India aims to create a more dynamic and innovative AI ecosystem. This approach seeks to balance the rapid advancement of AI technologies with the necessity for ethical development.


Cosmic Lounge: AI’s New Role in Game Development

Cosmic Lounge can prototype games in mere hours with its AI tool, Puzzle Engine. At Think Games 2024, cofounder Tomi Huttula showcased how the tool could transform the development process.

Key Takeaways:

  • Rapid Prototyping: Puzzle Engine streamlines game creation, generating levels, art, and logic through simple prompts, all within five to six hours.
  • Enhanced Productivity: The tool is designed to augment human creativity, offering feedback on game difficulty and monetization, which designers can refine.
  • Industry Implications: The introduction of generative AI in game development has stirred debates around job security, with the industry facing layoffs despite record profits.
  • Regulatory Moves: In response to growing AI use, Valve has set new guidelines for developers to declare AI involvement in game creation.

Why It Matters

Cosmic Lounge’s approach highlights AI as a collaborator, not a replacement, in the creative process, setting a precedent for the future of game development.


Midjourney Adjusts Terms Amid IP Controversies

Midjourney, known for its AI image and video generators, has updated its terms of service, reflecting its readiness to tackle intellectual property (IP) disputes in court.

Key Takeaways:

  • Strategic Confidence: The updated terms of service signal Midjourney’s belief that it can win legal battles over the use of creators’ works in its AI model training.
  • Fair Use Defense: The company leans on the fair use doctrine for using copyrighted materials for training, a stance not universally accepted by all creators.
  • Legal and Financial Risks: With $200 million in revenue, Midjourney faces the financial burden of potential lawsuits that could threaten its operations.

Why It Matters

Midjourney’s bold stance on IP and fair use highlights the ongoing tension between generative AI development and copyright law. The outcome of potential legal battles could set significant precedents for the AI industry.


Apple Acquires AI Startup DarwinAI

Apple has quietly acquired DarwinAI, a Canadian AI startup known for its vision-based technology aimed at improving manufacturing efficiency.

Key Takeaways:

  • Stealth Acquisition: While not officially announced, evidence of the acquisition comes from DarwinAI team members joining Apple’s machine learning teams, as indicated by their LinkedIn profiles.
  • Investment Background: DarwinAI had secured over $15 million in funding from notable investors.
  • Manufacturing and AI Optimization: DarwinAI’s technology focuses not only on manufacturing efficiency but also on optimizing AI models for speed and size, potentially enhancing on-device AI capabilities in future Apple products.
  • Apple’s AI Ambitions: Apple’s acquisition signals its intent to integrate GenAI features into its ecosystem. Tim Cook also hinted at new AI-driven functionalities expected to be revealed later this year.

Why It Matters

This strategic move could streamline Apple’s production lines and pave the way for innovative on-device AI features, potentially giving Apple a competitive edge in the race for AI dominance.


Bernie Sanders Proposes 32-Hour Workweek Bill

Senator Bernie Sanders has introduced a groundbreaking bill aiming to reduce the standard American workweek from 40 to 32 hours, without cutting worker pay, leveraging AI technology to boost worker benefits.

Key Takeaways:

  • Innovative Legislation: The Thirty-Two Hour Workweek Act, co-sponsored by Senator Laphonza Butler and Representative Mark Takano, plans to shorten work hours over three years.
  • Rationale: Sanders argues that increased worker productivity, fueled by AI and automation, should result in financial benefits for workers, not just executives and shareholders.
  • Global Context: Sanders highlighted that US workers work significantly more hours than their counterparts in Japan, the UK, and Germany, with less relative pay.
  • Inspired by Success: Following a successful four-day workweek trial in the UK, which showed positive effects on employee retention and productivity, Sanders is pushing for similar reforms in the US.
  • Challenges Ahead: The bill faces opposition from Republicans and a divided Senate, making its passage uncertain.

Why It Matters

If successful, it could set a new standard for work-life balance in the US and inspire similar changes worldwide. However, political hurdles may challenge its implementation.


EU Passes Landmark AI Regulation

The European Union has enacted the world’s first comprehensive AI legislation. The Artificial Intelligence Act aims to regulate AI technologies through a risk-based approach before public release.

Key Takeaways:

  • Risk-Based Framework: The legislation targets AI risks like hallucinations, deepfakes, and election manipulation, requiring compliance before market introduction.
  • Tech Community’s Concerns: Critics like Max von Thun highlight loopholes for public authorities and inadequate regulation of large foundation models, fearing tech monopolies’ growth.
  • Start-Up Optimism: Start-ups, such as Giskard, appreciate the clarity and potential for responsible AI development the regulation offers.
  • Debate on Risk Categorization: Calls for stricter classification of AI in the information space as high-risk underscore the law’s impact on fundamental rights.
  • Private Sector’s Role: EY’s Julie Linn Teigland emphasizes preparation for the AI sector, urging companies to understand their legal responsibilities under the new law.
  • Challenges for SMEs: Concerns arise about increased regulatory burdens on European SMEs, potentially favoring non-EU competitors.
  • Implementation Hurdles: Effective enforcement remains a challenge, with emphasis on resource allocation for the AI Office and the importance of including civil society in drafting general-purpose AI practices.

Why It Matters

While it aims to foster trust and safety in AI applications, the legislation’s real-world impact, especially concerning innovation and competition, invites a broad spectrum of opinions. Balancing regulation with innovation will be crucial.


Final thoughts

This week’s narratives underscore AI’s evolving role across technology, governance, and society. From fostering open innovation and enhancing conversational AI to navigating regulatory frameworks and reshaping work cultures, these developments highlight the complex interplay between AI’s potential and the ethical, legal, and social frameworks guiding its growth. As AI continues to redefine possibilities, the collective journey towards responsible and transformative AI use becomes ever more critical.



Last Week in AI

We’re seeing some fascinating developments in AI lately, from new apps and healthcare tools to major shifts in regulation and cybersecurity. Let’s dive into these updates.


OpenAI App Store Launch

OpenAI is about to shake things up by launching a store for GPTs, custom apps built on its AI models such as GPT-4. Here’s what’s happening:

  1. GPT Store Launch: This new platform, announced at OpenAI’s DevDay, is set to open soon. It’s a place where developers can list their GPT-based apps.
  2. Rules for Developers: If you’re making a GPT app, you’ve got to follow OpenAI’s latest usage policies and brand guidelines to get your app on the store.
  3. Diverse Applications: These GPTs can do all sorts of things, from specialized Q&As to generating code that follows best practices.

What’s the big deal? Well, OpenAI is moving from just offering AI models to creating a whole ecosystem where others can build and share their AI-powered apps. This could really democratize how generative AI apps are made, though we’re still waiting to see the full impact of this move.


Google DeepMind’s Fresh Approach to Training Robots

Google’s DeepMind team is pushing the boundaries in robotics for 2024. They’re working on cool new ways to train robots using videos and large language models. Here’s the lowdown:

  1. Smarter Robots: The goal is to make robots that get what humans want and can adapt better. They’re moving away from robots that just do one thing over and over.
  2. AutoRT System: This new system uses big AI models to control a bunch of robots at once. These robots can work together and handle different tasks by understanding visual and language cues.
  3. RT-Trajectory for Learning: They’ve also got this new method that uses video to teach robots. It’s turning out to be more successful than older ways of training.

Basically, DeepMind is working on making robots more versatile and quick learners. It’s a big step from the robots we’re used to, and it could really change how we think about and use robots in the future.


Microsoft Copilot

Microsoft has been pretty sneaky, launching its Copilot app on Android, iOS, and iPadOS during the holidays. It’s like a portable AI buddy, based on the same tech as OpenAI’s ChatGPT. Here’s the lowdown:

  1. AI-Powered Assistant: Copilot (you might know it as Bing Chat) can help with all sorts of tasks. Drafting emails, summarizing texts, planning trips, and more – just by typing in your questions or instructions.
  2. Creative Boost with DALL·E 3: The app’s got this cool Image Creator feature powered by DALL·E 3. It lets you experiment with different styles, whip up social media posts, design logos, and even visualize storyboards for films and videos.
  3. Popular and Free Access to Advanced AI: It’s a hit, with over 1.5 million downloads across Android and iOS. What’s really neat is that it uses OpenAI’s more advanced GPT-4 tech, and it’s free – unlike OpenAI’s own ChatGPT app, which charges for GPT-4 access.

Microsoft’s move to make Copilot a standalone app, especially after rebranding Bing Chat, shows they’re serious about making AI more accessible and widespread. It’s a big step in bringing advanced AI right into our daily digital lives.


Perplexity AI

Perplexity AI is a new player in the search engine game, but with an AI twist. It’s like a chatbot that lets users ask questions in everyday language and gives back answers with sources. Here’s the scoop:

  1. Chatbot-Style Search: You ask questions, and it replies with summaries and citations, kind of like chatting with a super-smart friend. And you can dig deeper with follow-up questions.
  2. Pro Plan Perks: For those who want more, there’s a Pro plan. It has cool features like image generation, a Copilot for unlimited help, and even lets you upload files for the AI to analyze.
  3. Ambitious AI Goals: Perplexity isn’t stopping at search. They’re rolling out their own GenAI models that use their search data and the web for better performance. This is available to Pro users through an API.

But, with great AI comes great responsibility. There are worries about misuse and misinformation, plus the costs and copyright issues since GenAI models learn from heaps of web content. Despite these challenges, Perplexity has raised a lot of money and boasts 10 million active users each month. It’s definitely a name to watch in the AI search world!
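For a sense of what “chatbot-style search” looks like programmatically, here’s a minimal sketch of building a request for a chat-completions-style API of the kind Perplexity offers Pro users. The endpoint URL, model name, and payload fields below are illustrative assumptions, not details confirmed by the article.

```python
import json

# Sketch only: the endpoint and model name are assumptions for
# illustration, not confirmed details of Perplexity's API.
API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint

def build_search_request(question, follow_up=None):
    """Build a chatbot-style search payload: one question, optionally
    followed by a second turn so the engine can resolve follow-up
    questions in conversational context."""
    messages = [{"role": "user", "content": question}]
    if follow_up is not None:
        # A real client would insert the assistant's prior answer here.
        messages.append({"role": "assistant", "content": "(previous answer)"})
        messages.append({"role": "user", "content": follow_up})
    return {
        "model": "sonar-online",  # placeholder model name
        "messages": messages,
    }

payload = build_search_request(
    "Who has backed Perplexity AI?",
    "And how many daily active users does it have?",
)
print(json.dumps(payload, indent=2))
```

The conversational message list is what distinguishes this style of search from a one-shot query: the engine answers the follow-up in the context of the earlier exchange, much like the follow-up questions described above.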


AI Regulations

In 2024, there’s more action on AI rules globally. Last year saw big steps in setting these up. Now, countries like the U.S., the European Union, and China are each crafting their own AI laws, and other regions are joining in with their approaches to AI and its effects.

Three key takeaways:

  1. The US, EU, and China each have their unique strategies for AI regulations, reflecting their influence in the AI sector.
  2. These upcoming regulations will significantly impact companies, especially those in AI.
  3. It’s not just about tech; these rules are shaping international politics and relationships.

In short, AI regulation is evolving rapidly, making a notable impact on businesses and global politics. It’s a crucial area to watch for anyone interested in the future of AI and its governance.


AI Cybersecurity

AI trends are really shaping up, especially in cybersecurity. Last year, generative AI was a big deal, and it’s going to have an even bigger impact this year. Here’s what’s going on:

Key points:

  1. AI’s use, misuse, and importance in cybersecurity are hot topics. Think of things like cyberattacks and data insecurity.
  2. Experts are talking about both the challenges and opportunities AI brings, like its role in detecting threats or creating malware.
  3. There’s a big focus on how AI might be misused for things like deepfakes and spreading false info.

In essence, AI is really changing the game in cybersecurity, with lots of potential for good and bad. It’s crucial for organizations to stay alert and understand how to handle these AI tools.
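The threat-detection side of point 2 boils down to spotting outliers in signals like failed-login counts. Here’s a toy sketch using a robust z-score based on the median absolute deviation; real systems use far richer features and learned models, and the host names and cutoff here are made up for illustration:

```python
import statistics

def flag_anomalies(counts, cutoff=3.5):
    """Flag keys whose value has a robust z-score above `cutoff`.

    Uses the median absolute deviation (MAD) instead of the standard
    deviation, so a single extreme outlier can't hide itself by
    inflating the baseline.
    """
    values = list(counts.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread at all: nothing stands out
    # 0.6745 rescales MAD to be comparable to a standard deviation
    return [k for k, v in counts.items() if 0.6745 * (v - med) / mad > cutoff]

# Invented example data: failed-login counts per host over one hour
failed_logins = {"web-1": 4, "web-2": 6, "db-1": 5, "bastion": 250}
print(flag_anomalies(failed_logins))
```

The design choice worth noting: with only a handful of hosts, a plain z-score can never exceed roughly the square root of the sample size, so a mean/standard-deviation version of this check would miss even the glaring outlier above.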


Data Ownership

The big thing in tech right now is all about who owns and controls data. We’re moving from a world where personal data was used freely to one where privacy and even data ownership rights are taking center stage. Think of it like data becoming the new “oil” for AI.

Here’s what’s happening:

  1. Laws like the GDPR kicked off this trend. Now, places like Brazil are also getting serious about data privacy and investing in regulations.
  2. This change is cutting down on the free-for-all use of personal data. Instead, we’re seeing new systems that give people more control over their data.
  3. Big names like Apple’s CEO, Tim Cook, are pushing for these changes, focusing on protecting and empowering consumers.

So, what’s the bottom line? Data ownership is becoming a huge deal in tech. It’s not just about privacy anymore; it’s about giving people a say in how their data is used, which is a game-changer for everyone in the data economy.
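The "more control over their data" idea from point 2 can be sketched as a purpose-limited consent check: personal data is only released for purposes the user has explicitly agreed to, and consent can be withdrawn. This is a toy illustration of the pattern, not a compliance implementation; the class and method names are invented:

```python
# Toy sketch of GDPR-style, purpose-limited consent: data access is
# gated on a recorded (user, purpose) grant, and grants are revocable.
# Real compliance involves much more (lawful bases, audit trails,
# data-subject requests); this only shows control shifting to the user.
class ConsentRegistry:
    def __init__(self):
        self._grants = set()  # set of (user_id, purpose) pairs

    def grant(self, user_id, purpose):
        self._grants.add((user_id, purpose))

    def revoke(self, user_id, purpose):
        self._grants.discard((user_id, purpose))

    def allowed(self, user_id, purpose):
        return (user_id, purpose) in self._grants

registry = ConsentRegistry()
registry.grant("alice", "analytics")
print(registry.allowed("alice", "analytics"))    # consent on record
print(registry.allowed("alice", "advertising"))  # never consented
```

The key shift from the old "free-for-all" model is that the default answer is no: without a recorded grant for that exact purpose, the data simply isn’t released.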


Investing in AI

In 2024, AI investing looks like it’s moving beyond the hype. Investors are keen on funding AI startups and expect that trend to continue, but there’s a shift toward more sustainable, focused AI businesses.

Here’s the scoop:

  1. We’re anticipating a new wave of AI startups. These aren’t just built on tech from giants like OpenAI or Google; they’re more specialized and sector-specific.
  2. Investors like Lisa Wu from Norwest Venture Partners see big potential in these specialized AI businesses. They’re seen as safer bets because they’re not easy for big companies to just replicate.
  3. These startups are all about knowing their specific users and using AI to boost productivity. For example, law firms are using AI to work more efficiently and get better results at lower costs.

In short, AI investing is maturing. It’s less about general hype and more about creating targeted solutions that really understand and improve specific industries.


AI in Healthcare

Nabla, a Paris-based startup, is making big moves in healthcare with its AI doctor’s assistant. They’ve just bagged $24 million in Series B funding, and here’s why they’re a game-changer:

  1. Revolutionizing Medical Documentation: Nabla’s AI helps doctors by transcribing conversations, highlighting important info, and creating medical reports quickly. It’s all about boosting doctors’ efficiency, not replacing them.
  2. Privacy-First Data Handling: They put privacy first: no storing audio or notes without clear consent. Plus, they’re keen on accuracy, allowing doctors to share notes so transcription errors can be corrected.
  3. Impact and Future Goals: This AI tool is already helping thousands of doctors in the U.S., especially with the Permanente Medical Group. Nabla aims for FDA approval and wants to keep doctors at the heart of healthcare.
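The "highlighting important info" step in point 1 can be illustrated with a toy extractor that pulls dosage mentions out of a transcript. Nabla’s actual pipeline uses ML models; this regex version is only a sketch of the extraction idea, and the sample transcript is invented:

```python
import re

# Toy illustration of highlighting important info in a consultation
# transcript: flag any line that mentions a dosage-like phrase.
# A real medical scribe uses ML models, not a single regex.
DOSAGE = re.compile(r"\b\d+\s?(?:mg|ml|mcg)\b", re.IGNORECASE)

def highlight_dosages(transcript):
    """Return the transcript lines that mention a dosage."""
    return [line for line in transcript.splitlines() if DOSAGE.search(line)]

# Invented sample transcript
notes = """Doctor: How are you feeling today?
Patient: Better, but the headaches come back at night.
Doctor: Let's increase the ibuprofen to 400 mg twice daily."""
print(highlight_dosages(notes))
```

Even this crude version shows why the tool assists rather than replaces: it surfaces candidate lines, but a clinician still decides what belongs in the report.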

In short, Nabla’s AI is here to assist doctors, not take over their jobs. With this new funding, they’re set to transform how doctors use technology, all while maintaining strict privacy standards. It’s an exciting step forward for AI in healthcare. 🚀💡🏥


Final Thoughts

In the AI world, big things are happening! OpenAI’s new store, Google’s smart robots, Microsoft’s Copilot app, and Perplexity AI’s search engine are shaking things up. Plus, AI’s roles in healthcare, data ownership, and global regulation are evolving fast. It’s a thrilling time for AI, with major changes and innovations all around! 🌐💡🤖



ISO/IEC 42001: The Right Path for AI?

The world of AI is buzzing with the release of the ISO/IEC 42001 standard. It’s meant to guide organizations in responsible AI management, but is it the best approach? Let’s weigh the pros and cons.

The Good Stuff About ISO/IEC 42001

Transparency and Explainability: It aims to make AI understandable, which is super important. You want to know how and why AI makes decisions, right?

Universally Applicable: This standard is for everyone, no matter the industry or company size. That sounds great for consistency.

Trustworthy AI: It’s all about building AI systems that are safe and reliable. This could really boost public trust in AI.

But, Are There Downsides?

One Size Fits All?: Can one standard really cover the huge diversity in AI applications? What works for one industry might not for another.

Complexity: Implementing these standards could be tough, especially for smaller companies. Will they have the resources to keep up?

Innovation vs. Regulation: Could these rules slow down AI innovation? Sometimes too many rules stifle creativity.

What’s the Real Impact?

Risk Mitigation: It helps identify and manage risks, which is definitely a good thing. No one wants out-of-control AI.

Human-Centric Focus: Prioritizing safety and user experience is awesome. We don’t want AI that’s harmful or hard to use.

Setting a Global Benchmark: It could set a high bar for AI globally. But will all countries and companies jump on board?

In a nutshell, ISO/IEC 42001 has some solid goals, aiming for ethical, understandable AI. But we’ve got to ask: Will it work for everyone? Could it slow down AI progress? It’s a big step, but whether it’s the right one is still up for debate. For organizations stepping into AI, it’s a guide worth considering but also questioning.

This standard could shape the future of AI – but it’s crucial to balance innovation with responsibility. What do you think? Is ISO/IEC 42001 the way to go, or do we need a different approach?

Read more on what businesses need to know in order to navigate this tricky AI terrain.


EU’s Big Move on AI: A Simple Breakdown

Hey there, tech folks! Big news from the EU – they’ve just rolled out a plan to keep AI in check. It’s a huge deal and kind of a first of its kind. Let’s break it down.

What’s the Buzz?

So, the EU lawmakers got together and decided it’s time to regulate AI. This isn’t just any agreement; it’s being called a “global first.” They’re setting up new rules for how AI should work.

The New AI Rules

Here’s the scoop:

  • Total Ban on Some AI Uses: The EU is saying no to AI for things like scanning faces randomly and categorizing people without a specific reason. It’s all about using AI responsibly.
  • High-Risk AI Gets Special Attention: AI that’s considered ‘high risk’ will have to follow some strict new rules.
  • A Two-Tier System: Even lower-risk, general-purpose AI systems have to follow new transparency guidelines.

Helping Startups and Innovators

It’s not all about restrictions, though. The EU is also setting up regulatory sandboxes: supervised environments where small companies can test their AI safely before it goes to market. Think of it like a playground where startups can try out their AI toys.

The Timeline

This new AI Act is set to kick in soon, but the full impact might not show until around 2026. The EU is taking its time to make sure everything works out smoothly.

Why Does This Matter?

This agreement is a big step for tech in Europe. It’s about making sure AI is safe and used in the right way. The EU is trying to balance being innovative with respecting people’s rights and values.

Wrapping Up

So, there you have it! The EU is making some bold moves in AI. For anyone into tech, this is something to watch. It’s about shaping how AI grows and making sure it’s good for everyone.

For more on AI and ethics, read our Ethical Maze of AI: A Guide for Businesses.
