
Last Week in AI

We’re seeing some fascinating developments in AI lately, from new apps and healthcare tools to major shifts in regulation and cybersecurity. Let’s dive into these updates.


OpenAI App Store Launch

OpenAI is about to shake things up by launching a store for GPTs, custom apps built on their AI models like GPT-4. Here’s what’s happening:

  1. GPT Store Launch: This new platform, announced at OpenAI’s DevDay, is set to open soon. It’s a place where developers can list their GPT-based apps.
  2. Rules for Developers: If you’re making a GPT app, you’ve got to follow OpenAI’s latest usage policies and brand guidelines to get your app on the store.
  3. Diverse Applications: These GPTs can do all sorts of things, from specialized Q&As to generating code that follows best practices.
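Under the hood, a GPT-style app is mostly a bundle of standing instructions shipped with every request. Here's a minimal sketch of what such a request payload could look like, assuming the standard chat-completions format; the model name, prompts, and settings are illustrative placeholders, not anything from a real store listing:

```python
# Illustrative request payload for a GPT-style app built on OpenAI's
# chat-completions API. Model name and prompts are placeholders.
payload = {
    "model": "gpt-4",
    "messages": [
        # the system message carries the custom app's standing instructions
        {"role": "system", "content": "You are a code-review assistant. Follow PEP 8."},
        # the user message is the end user's actual question
        {"role": "user", "content": "Review this function for best practices."},
    ],
    "temperature": 0.2,  # lower values keep review feedback more consistent
}

# With the official SDK, the payload would be sent roughly like this
# (commented out here because it needs a live API key):
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# reply = client.chat.completions.create(**payload)
# print(reply.choices[0].message.content)
```

The point is that a "custom GPT" is largely this system message plus branding and tools layered on top, which is what makes the store model feasible.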

What’s the big deal? Well, OpenAI is moving from just offering AI models to creating a whole ecosystem where others can build and share their AI-powered apps. This could really democratize how generative AI apps are made, though we’re still waiting to see the full impact of this move.


Google’s Fresh Approach to Training Robots with LLMs

Google’s DeepMind team is pushing the boundaries in robotics for 2024. They’re working on cool new ways to train robots using videos and big language models. Here’s the lowdown:

  1. Smarter Robots: The goal is to make robots that get what humans want and can adapt better. They’re moving away from robots that just do one thing over and over.
  2. AutoRT System: This new system uses big AI models to control a bunch of robots at once. These robots can work together and handle different tasks by understanding visual and language cues.
  3. RT-Trajectory for Learning: They’ve also got this new method that uses video to teach robots. It’s turning out to be more successful than older ways of training.

Basically, DeepMind is working on making robots more versatile and quick learners. It’s a big step from the robots we’re used to, and it could really change how we think about and use robots in the future.


Microsoft Copilot

Microsoft has been pretty sneaky, launching its Copilot app on Android, iOS, and iPadOS during the holidays. It’s like a portable AI buddy, based on the same tech as OpenAI’s ChatGPT. Here’s the lowdown:

  1. AI-Powered Assistant: Copilot (you might know it as Bing Chat) can help with all sorts of tasks. Drafting emails, summarizing texts, planning trips, and more – just by typing in your questions or instructions.
  2. Creative Boost with DALL·E 3: The app’s got this cool Image Creator feature powered by DALL·E 3. It lets you experiment with different styles, whip up social media posts, design logos, and even visualize storyboards for films and videos.
  3. Popular and Free Access to Advanced AI: It’s a hit! Over 1.5 million downloads across Android and iOS. What’s really neat is it uses the more advanced GPT-4 tech from OpenAI, and it’s free – unlike OpenAI’s GPT app that charges for GPT-4 access.

Microsoft’s move to make Copilot a standalone app, especially after rebranding Bing Chat, shows they’re serious about making AI more accessible and widespread. It’s a big step in bringing advanced AI right into our daily digital lives.


Perplexity AI

Perplexity AI is a new player in the search engine game, but with an AI twist. It’s like a chatbot that lets users ask questions in everyday language and gives back answers with sources. Here’s the scoop:

  1. Chatbot-Style Search: You ask questions, and it replies with summaries and citations, kind of like chatting with a super-smart friend. And you can dig deeper with follow-up questions.
  2. Pro Plan Perks: For those who want more, there’s a Pro plan. It has cool features like image generation, a Copilot for unlimited help, and even lets you upload files for the AI to analyze.
  3. Ambitious AI Goals: Perplexity isn’t stopping at search. They’re rolling out their own GenAI models that use their search data and the web for better performance. This is available to Pro users through an API.
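For Pro users, querying Perplexity's models through the API looks roughly like any chat-style request. This is a hedged sketch only: the endpoint URL and model identifier below are assumptions for illustration, so check Perplexity's own API docs for the real values before using them:

```python
import json

# Hypothetical Perplexity API request. The URL and model name are
# assumptions for illustration; consult the official docs for real values.
request = {
    "url": "https://api.perplexity.ai/chat/completions",
    "headers": {
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder key
        "Content-Type": "application/json",
    },
    "body": json.dumps({
        "model": "pplx-online",  # hypothetical model identifier
        "messages": [
            {"role": "user", "content": "What changed in AI regulation last week?"},
        ],
    }),
}

# Sending it would be a single POST with the requests library:
# import requests
# resp = requests.post(request["url"], headers=request["headers"], data=request["body"])
# print(resp.json()["choices"][0]["message"]["content"])
```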

But, with great AI comes great responsibility. There are worries about misuse and misinformation, plus the costs and copyright issues since GenAI models learn from heaps of web content. Despite these challenges, Perplexity has raised a lot of money and boasts 10 million active users each month. It’s definitely a name to watch in the AI search world!


AI Regulations

In 2024, there’s more action on AI rules globally. Last year saw big steps in setting these up. Now, countries like the U.S., the European Union, and China are each crafting their own AI laws, and other regions are joining in with their approaches to AI and its effects.

Three key takeaways:

  1. The U.S., EU, and China each have their unique strategies for AI regulations, reflecting their influence in the AI sector.
  2. These upcoming regulations will significantly impact companies, especially those in AI.
  3. It’s not just about tech; these rules are shaping international politics and relationships.

In short, AI regulation is evolving rapidly, making a notable impact on businesses and global politics. It’s a crucial area to watch for anyone interested in the future of AI and its governance.


AI Cybersecurity

AI trends are really shaping up, especially in cybersecurity. Last year, generative AI was a big deal, and it’s going to have an even bigger impact this year. Here’s what’s going on:

Key points:

  1. AI’s use, misuse, and importance in cybersecurity are hot topics. Think of things like cyberattacks and data insecurity.
  2. Experts are talking about both the challenges and opportunities AI brings, like its role in detecting threats or creating malware.
  3. There’s a big focus on how AI might be misused for things like deepfakes and spreading false info.

In essence, AI is really changing the game in cybersecurity, with lots of potential for good and bad. It’s crucial for organizations to stay alert and understand how to handle these AI tools.


Data Ownership

The big thing in tech right now is all about who owns and controls data. We’re moving from a world where personal data was used freely to one where privacy and even data ownership rights are taking center stage. Think of it like data becoming the new “oil” for AI.

Here’s what’s happening:

  1. Laws like the GDPR kicked off this trend. Now, places like Brazil are also getting serious about data privacy and investing in regulations.
  2. This change is cutting down on the free-for-all use of personal data. Instead, we’re seeing new systems that give people more control over their data.
  3. Big names like Apple’s CEO, Tim Cook, are pushing for these changes, focusing on protecting and empowering consumers.

So, what’s the bottom line? Data ownership is becoming a huge deal in tech. It’s not just about privacy anymore; it’s about giving people a say in how their data is used, which is a game-changer for everyone in the data economy.


Investing in AI

In 2024, AI investing looks like it’s moving beyond just hype. Investors are keen on funding AI startups and are expecting this trend to keep up. But now, there’s a shift towards more sustainable, focused businesses in AI.

Here’s the scoop:

  1. We’re anticipating a new wave of AI startups. These aren’t just building on tech from giants like OpenAI or Google, but are more specialized and sector-specific.
  2. Investors like Lisa Wu from Norwest Venture Partners see big potential in these specialized AI businesses. They’re seen as safer bets because they’re not easy for big companies to just replicate.
  3. These startups are all about knowing their specific users and using AI to boost productivity. For example, law firms are using AI to work more efficiently and get better results at lower costs.

In short, AI investing is maturing. It’s less about general hype and more about creating targeted solutions that really understand and improve specific industries.


AI in Healthcare

Nabla, a Paris-based startup, is making big moves in healthcare with its AI doctor’s assistant. They’ve just bagged $24 million in Series B funding, and here’s why they’re a game-changer:

  1. Revolutionizing Medical Documentation: Nabla’s AI helps doctors by transcribing conversations, highlighting important info, and creating medical reports quickly. It’s all about boosting doctors’ efficiency, not replacing them.
  2. Privacy-First Data Handling: They put privacy first. No storing audio or notes without clear consent. Plus, they’re keen on accuracy, letting doctors share notes so transcription errors can be corrected.
  3. Impact and Future Goals: This AI tool is already helping thousands of doctors in the U.S., especially with the Permanente Medical Group. Nabla aims for FDA approval and wants to keep doctors at the heart of healthcare.

In short, Nabla’s AI is here to assist doctors, not take over their jobs. With this new funding, they’re set to transform how doctors use technology, all while maintaining strict privacy standards. It’s an exciting step forward for AI in healthcare. 🚀💡🏥


Final Thoughts

In the AI world, big things are happening! OpenAI’s new store, Google’s smart robots, Microsoft’s Copilot app, and Perplexity AI’s search engine are shaking things up. Plus, AI’s role in healthcare, data ownership, and global regulations are evolving fast. It’s a thrilling time for AI, with major changes and innovations all around! 🌐💡🤖


ISO/IEC 42001: The Right Path for AI?

The world of AI is buzzing with the release of the ISO/IEC 42001 standard. It’s meant to guide organizations in responsible AI management, but is it the best approach? Let’s weigh the pros and cons.

The Good Stuff About ISO/IEC 42001

Transparency and Explainability: It aims to make AI understandable, which is super important. You want to know how and why AI makes decisions, right?

Universally Applicable: This standard is for everyone, no matter the industry or company size. That sounds great for consistency.

Trustworthy AI: It’s all about building AI systems that are safe and reliable. This could really boost public trust in AI.

But, Are There Downsides?

One Size Fits All?: Can one standard really cover the huge diversity in AI applications? What works for one industry might not for another.

Complexity: Implementing these standards could be tough, especially for smaller companies. Will they have the resources to keep up?

Innovation vs. Regulation: Could these rules slow down AI innovation? Sometimes too many rules stifle creativity.

What’s the Real Impact?

Risk Mitigation: It helps identify and manage risks, which is definitely a good thing. No one wants out-of-control AI.

Human-Centric Focus: Prioritizing safety and user experience is awesome. We don’t want AI that’s harmful or hard to use.

Setting a Global Benchmark: It could set a high bar for AI globally. But will all countries and companies jump on board?

In a nutshell, ISO/IEC 42001 has some solid goals, aiming for ethical, understandable AI. But we’ve got to ask: Will it work for everyone? Could it slow down AI progress? It’s a big step, but whether it’s the right one is still up for debate. For organizations stepping into AI, it’s a guide worth considering but also questioning.

This standard could shape the future of AI – but it’s crucial to balance innovation with responsibility. What do you think? Is ISO/IEC 42001 the way to go, or do we need a different approach?

Read more on what businesses need to know in order to navigate this tricky AI terrain.

EU’s Big Move on AI: A Simple Breakdown

Hey there, tech folks! Big news from the EU – they’ve just rolled out a plan to keep AI in check. It’s a huge deal and kind of a first of its kind. Let’s break it down.

What’s the Buzz?

So, the EU lawmakers got together and decided it’s time to regulate AI. This isn’t just any agreement; it’s being called a “global first.” They’re setting up new rules for how AI should work.

The New AI Rules

Here’s the scoop:

  • Total Ban on Some AI Uses: The EU is saying no to things like untargeted scraping of facial images and biometric categorization of people based on sensitive traits. It’s all about using AI responsibly.
  • High-Risk AI Gets Special Attention: AI that’s considered ‘high risk’ will have to follow some strict new rules.
  • A Two-Tier System: Even general-purpose AI systems have to meet baseline transparency guidelines, while high-risk ones face the stricter requirements.
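To make the tiering concrete, here's a toy sketch in Python. The categories and example use cases are illustrative only, not the Act's legal definitions:

```python
# Toy model of the EU AI Act's tiered approach. Categories and examples
# are illustrative, not legal definitions from the Act itself.
PROHIBITED = {"untargeted facial scraping", "social scoring"}
HIGH_RISK = {"hiring screening", "credit scoring", "medical diagnosis"}

def risk_tier(use_case: str) -> str:
    """Return the (illustrative) regulatory tier for an AI use case."""
    if use_case in PROHIBITED:
        return "banned outright"
    if use_case in HIGH_RISK:
        return "high-risk: strict obligations"
    return "general: baseline transparency rules"

print(risk_tier("social scoring"))      # banned outright
print(risk_tier("email autocomplete"))  # general: baseline transparency rules
```

The real Act is far more nuanced, but the shape is the same: a short list of outright bans, a regulated high-risk tier, and lighter baseline rules for everything else.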

Helping Startups and Innovators

It’s not all about restrictions, though. The EU is also setting up ways to help small companies test their AI safely before it goes to market. Think of it like a playground where startups can test their AI toys.

The Timeline

This new AI Act is set to kick in soon, but the full impact might not show until around 2026. The EU is taking its time to make sure everything works out smoothly.

Why Does This Matter?

This agreement is a big step for tech in Europe. It’s about making sure AI is safe and used in the right way. The EU is trying to balance being innovative with respecting people’s rights and values.

Wrapping Up

So, there you have it! The EU is making some bold moves in AI. For anyone into tech, this is something to watch. It’s about shaping how AI grows and making sure it’s good for everyone.

For more AI and ethics read our Ethical Maze of AI: A Guide for Businesses.