
Last Week in AI: Episode 31

Hey everyone, welcome to this week’s edition of “Last Week in AI.” This week’s stories provide a glimpse into how AI is reshaping industries and our daily lives. Let’s dive in and explore these fascinating developments together.

OpenAI and Stack Overflow Partnership

Partnership Announcement: OpenAI and Stack Overflow have formed a new API partnership to leverage their collective strengths—Stack Overflow’s technical knowledge platform and OpenAI’s language models.

Impact and Controversy: This partnership aims to empower developers by combining high-quality technical content with advanced AI models. However, some Stack Overflow users have protested, arguing that it exploits their contributed labor without consent; staff have responded by reverting protest edits and suspending accounts. Despite the potential for improved AI models, the episode raises questions about content-creator attribution and consent for future model training. Read more

Apple’s New Photos App Feature

Feature Introduction: Apple is set to introduce a “Clean Up” feature in its Photos app update, leveraging generative AI for advanced image editing. This tool will allow users to remove objects from photos using a brush tool, similar to Adobe’s Content-Aware Fill.

Preview and Positioning: Currently in testing in macOS 15 builds, the feature may be previewed at Apple’s “Let Loose” iPad event on May 7, 2024. This positions the new iPads as AI-equipped devices, showcasing practical AI applications beyond chatbots and entertainment. Read more

YouTube Premium’s AI “Jump Ahead” Feature

Feature Testing: YouTube Premium subscribers can now test an AI-powered “Jump ahead” feature, allowing them to skip commonly skipped video sections. By double-tapping to skip, users can jump to the point where most viewers typically resume watching.

Availability and Aim: This feature is currently available on the YouTube Android app in the US for English videos and requires a Premium subscription. It complements YouTube’s “Ask” feature and aims to enhance the viewing experience by leveraging AI and user data. Read more
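
To make the mechanics concrete, here’s a toy sketch of the idea (not YouTube’s implementation): pool the timestamps where past viewers resumed after skipping, and jump to the most common one. All names and data below are hypothetical.

```python
# Toy sketch of the "Jump ahead" idea, not YouTube's implementation:
# given where past viewers resumed after skipping, pick the modal resume point.
from collections import Counter

def jump_target(resume_times, bucket=5):
    """Bucket resume timestamps (seconds) and return the most common bucket's start."""
    buckets = Counter(t // bucket for t in resume_times)
    return buckets.most_common(1)[0][0] * bucket

# Hypothetical resume timestamps gathered from earlier viewers of one video.
print(jump_target([95, 96, 95, 120, 95, 96, 94]))  # -> 95
```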

Microsoft’s MAI-1 Language Model Development

Model Development: Microsoft is developing a new large-scale AI language model, MAI-1, led by Mustafa Suleyman, the former CEO of Inflection AI. MAI-1 will have approximately 500 billion parameters, significantly larger than Microsoft’s previous models.

Strategic Significance: This development signifies Microsoft’s dual approach to AI, focusing on both small and large models. Despite its investment in OpenAI, Microsoft is independently advancing its AI capabilities, with plans to unveil MAI-1 at their Build conference. Read more

AI in Drug Discovery at Eli Lilly

Innovative Discovery: The pharmaceutical industry is integrating AI into drug discovery, with Eli Lilly scientists noting innovative molecular designs generated by AI. This sets a precedent for AI-driven breakthroughs in biology.

Industry Impact: AI is expected to propose new drugs and generate designs beyond human capability. This integration promises faster development times, higher success rates, and exploration of new targets, reshaping drug discovery. Read more

AI-Narrated Audiobooks on Audible

Audiobook Trends: Over 40,000 AI-voiced titles have been added to Audible since Amazon launched a tool for self-published authors to generate AI narrations. This makes audiobook creation more accessible but has sparked controversy.

Industry Reaction: Some listeners dislike the lack of filters to exclude AI narrations, and human narrators fear job losses. Major publishers are embracing AI for cost savings, highlighting tensions between creative integrity and commercial incentives. Read more

Apple’s M4 Chip for iPad Pro

Processor Introduction: Apple’s M4 chip, the latest and most powerful processor for the new iPad Pro, offers groundbreaking performance and efficiency.

Key Innovations: The M4 chip features a 10-core CPU, 10-core GPU, advanced AI capabilities, and power efficiency gains. These innovations enable superior graphics, real-time AI features, and all-day battery life. Read more

Google’s Pixel 8a Smartphone

Affordable Innovation: The Pixel 8a, Google’s latest affordable smartphone, is priced at $499 and packed with AI-powered features and impressive camera capabilities.

Key Highlights: The Pixel 8a features a refined design, dual rear camera, AI tools, and enhanced security. It also offers family-friendly features and 7 years of software support. Read more

OpenAI’s Media Manager Tool

Tool Development: OpenAI is building a Media Manager tool to help creators manage how their works are included in AI training data. This system aims to identify copyrighted material across sources.

AI Training Approach: OpenAI uses diverse public datasets and proprietary data to train its models, collaborating with creators, publishers, and regulators to support healthy ecosystems and respect intellectual property. Read more

Machine Learning in Sperm Whale Communication

Breakthrough Discovery: MIT CSAIL and Project CETI researchers have discovered a combinatorial coding system in sperm whale vocalizations, akin to a phonetic alphabet, using machine learning techniques.

Communication Insights: By analyzing a large dataset of whale codas, researchers identified patterns and structures, suggesting a complex communication system previously thought unique to humans. This finding opens new avenues for studying cetacean communication. Read more
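
For a flavor of the kind of analysis involved, here’s an illustrative sketch (not Project CETI’s actual pipeline): represent each coda by its inter-click intervals, then cluster codas to surface recurring rhythm and tempo patterns. The click times below are made up.

```python
# Illustrative coda clustering, not Project CETI's actual pipeline.
import numpy as np
from sklearn.cluster import KMeans

# Made-up data: each coda is a sequence of click times in seconds.
codas = [
    [0.00, 0.18, 0.36, 0.55],
    [0.00, 0.10, 0.21, 0.30, 0.42],
    [0.00, 0.19, 0.37, 0.57],
    [0.00, 0.09, 0.20, 0.31, 0.41],
]

def features(coda, n=4):
    """Fixed-length vector of inter-click intervals (padded or truncated to n)."""
    icis = np.diff(coda)
    vec = np.zeros(n)
    vec[: min(n, len(icis))] = icis[:n]
    return vec

X = np.array([features(c) for c in codas])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # codas with similar rhythm and tempo share a cluster, e.g. [0 1 0 1]
```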

Sam Altman’s Concerns About AI’s Economic Impact

CEO’s Warning: Sam Altman, CEO of OpenAI, has expressed significant concerns about AI’s potential impact on the labor market and economy, particularly job disruptions and economic changes.

Economic Threat: Studies suggest AI could affect up to 60% of jobs in advanced economies, putting some workers at risk of job losses and lower wages. Altman emphasizes the need to address these concerns proactively. Read more

AI Lecturers at Hong Kong’s HKUST

Educational Innovation: The Hong Kong University of Science and Technology (HKUST) is testing AI-generated virtual lecturers, including an AI version of Albert Einstein, to transform teaching methods and engage students.

Teaching Enhancement: AI lecturers aim to address teacher shortages and enhance learning experiences. While students find them approachable, some prefer human teachers for unique experiences. Read more

OpenAI’s NSFW Content Proposal

Content Policy Debate: OpenAI is considering allowing users to generate NSFW content, including erotica and explicit images, using its AI tools like ChatGPT and DALL-E. This proposal has sparked controversy.

Ethical Concerns: Critics argue it contradicts OpenAI’s mission of developing “safe and beneficial” AI. OpenAI acknowledges potential valid use cases but emphasizes responsible generation within appropriate contexts. Read more

Bumble’s Vision for AI in Dating

Future of Dating: Bumble founder Whitney Wolfe Herd envisions AI “dating concierges” streamlining the matching process by essentially going on dates to find compatible matches for users.

AI Assistance: These AI assistants could also provide dating coaching and advice. Despite concerns about AI companions forming unhealthy bonds, Bumble’s focus remains on fostering healthy relationships. Read more

Final Thoughts

This week’s updates showcase AI’s transformative power in areas like education, healthcare, and digital content creation. However, they also raise critical questions about ethics, job displacement, and intellectual property. As we look to the future, it’s essential to balance innovation with responsibility, ensuring AI advancements benefit society as a whole. Thanks for joining us, and stay tuned for more insights and updates in next week’s edition of “Last Week in AI.”


[Figure: Illustration of the Mixture of Experts architecture in Mixtral 8x7B]

Mixtral 8x7B: The Open-Source Contender in AI Language Models

Mistral AI’s Mixtral 8x7B Instruct model has rolled out on the OctoAI Text Gen Solution. It’s big news for AI enthusiasts and builders alike. Here’s why this model is turning heads:

Mixtral 8x7B: A New Star on the Horizon

The Mixtral 8x7B Instruct model is making waves as a top-tier, open-source alternative to GPT-3.5. What makes it stand out? Well, for starters, it’s high-quality and comes at a per-token cost roughly 4x lower than GPT-3.5’s. Talk about a budget-friendly AI solution!

Outperforming the Giants

Mistral AI isn’t playing around. Their model’s sparse Mixture of Experts (MoE) architecture (eight expert feed-forward networks per layer, with a router activating only two of them per token) has already shown it can outdo big names like Llama 2 70B and GPT-3.5 in various benchmarks. That’s no small feat!
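
To make that concrete, here’s a minimal sketch of a sparse MoE layer with top-2 routing, the mechanism behind Mixtral. The dimensions and module structure are illustrative assumptions, not Mistral’s actual implementation.

```python
# Minimal sparse MoE layer with top-2 routing (illustrative, not Mistral's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim=512, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts)  # the router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):  # x: (num_tokens, dim)
        weights, idx = self.gate(x).topk(self.top_k, dim=-1)  # 2 experts per token
        weights = F.softmax(weights, dim=-1)  # renormalize the kept scores
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for k in range(self.top_k):
                mask = idx[:, k] == e  # tokens whose k-th pick is expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

moe = SparseMoE()
print(moe(torch.randn(16, 512)).shape)  # torch.Size([16, 512])
```

Only two of the eight expert MLPs run for any given token, which is how Mixtral keeps per-token compute (roughly 13B active parameters out of about 47B total) far below that of a comparably capable dense model.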

Why Choose Mixtral 8x7B?

  • Open Source Advantage: Love tinkering? This model’s open-source nature gives you all the flexibility to play around and tailor it to your needs.
  • Competitive Performance: It’s not just about being cheaper. This model packs a punch in performance, standing toe-to-toe with the big players in the AI field.
  • User-Friendly Experience: Thanks to OctoAI’s platform, you get a unified API endpoint, model acceleration, and reliable scalability. It’s like having the best tools at your fingertips (see the call sketch just after this list).
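
Here’s a hypothetical call against that unified endpoint, assuming an OpenAI-compatible chat-completions API. The URL, model identifier, and response shape below are illustrative guesses, not confirmed OctoAI values; check OctoAI’s documentation for the real ones.

```python
# Hypothetical request sketch, assuming an OpenAI-compatible chat endpoint.
# The URL, model id, and response shape are illustrative guesses, not
# confirmed OctoAI values -- consult their documentation before use.
import os
import requests

resp = requests.post(
    "https://text.octoai.run/v1/chat/completions",  # assumed endpoint URL
    headers={"Authorization": f"Bearer {os.environ['OCTOAI_TOKEN']}"},
    json={
        "model": "mixtral-8x7b-instruct",  # assumed model identifier
        "messages": [{"role": "user", "content": "Summarize sparse MoE in one line."}],
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```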

Get Started, No Cost Attached!

Curious to try it out? You can dive into the Mixtral 8x7B Instruct model today without spending a dime. Just sign up for the OctoAI Text Gen Solution, and you’re good to go.

A Community-Centric Approach

OctoAI is all about listening to its community. Adding Mixtral to the OctoAI model library is a nod to the power of collaborative input. And if you’re into networking, hop onto Discord to connect with the OctoAI team and fellow AI enthusiasts.

What’s Next?

For those of you using closed-source LLMs, this could be your chance to switch gears. OctoAI’s new promotion is all about easing the transition to open-source LLMs in your applications.

Wrapping Up

Mistral AI’s announcement about Mixtral is just the tip of the iceberg. If you’re keen on exploring more about this cool AI development, check out the full release announcement from Mistral. And don’t forget, engaging with the OctoAI community is just a Discord sign-up away!

There you have it, folks – Mixtral 8x7B is here to shake things up in the AI world. Excited to see where this leads? So are we! 🚀💻🤖



AI Transparency: AI’s Secretive Nature

In an era where Artificial Intelligence (AI) intertwines with our daily lives, the call for AI Transparency is louder than ever. A recent study from Stanford University casts light on the secretive nature of modern AI systems, most notably GPT-4, the powerhouse behind ChatGPT. This piece aims to unravel that secrecy, highlighting the potential dangers to the scientific community and beyond.

The Transparency Enigma 🕵️

Venturing into a quest for transparency, Stanford researchers examined 10 prominent AI systems, spotlighting large language models akin to GPT-4. Their findings? Somewhat disconcerting: none of the models surpassed 54 percent on the researchers’ transparency scale across all criteria. This opacity isn’t a mere glitch; some see it as a feature, veiling the complex mechanics from prying eyes to retain a competitive edge. Yet the concealment comes at a cost: it threatens to morph the field from an open scientific endeavor into a fortress of proprietary secrets. (A toy illustration of this kind of indicator-based scoring follows the list below.)

  • A glaring instance is GPT-4’s clandestine nature, which leaves many in the AI community and the general populace in a realm of conjecture.
  • The quest for profitability, some argue, is overshadowing the noble pursuit of knowledge and shared understanding in the AI domain.
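
As promised above, here’s a toy illustration of indicator-based scoring in the spirit of such a transparency index. The indicator names are invented for the example, not Stanford’s actual criteria.

```python
# Toy transparency score: the percentage of binary disclosure indicators met.
# These indicator names are invented for illustration, not Stanford's criteria.
indicators = {
    "training_data_disclosed": False,
    "compute_and_hardware_disclosed": True,
    "model_architecture_disclosed": True,
    "data_labor_practices_disclosed": False,
    "downstream_usage_policy_published": True,
}

score = 100 * sum(indicators.values()) / len(indicators)
print(f"transparency score: {score:.0f}%")  # -> 60%
```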

AI’s Growing Clout Amid Secrecy 🌐

As AI’s influence burgeons, the veil of secrecy encasing it seems to thicken. This paradox isn’t merely an academic conundrum; it’s a societal quandary. The opaque nature of these AI behemoths creates a realm where only a select few hold the keys to the AI kingdom. Consequently, the rest are left in a state of dependency and ignorance.

  • The ubiquitous deployment of AI models across sectors underscores the urgency for greater transparency.
  • Experts are ringing alarm bells about the risks of masking AI’s inner workings, and their warnings echo across the tech realm.

The Clarion Call for Openness 🔊

The narrative from Stanford illuminates a pathway towards mitigating the risks associated with AI’s opaque demeanor. The call for more openness isn’t just a theoretical plea but a pragmatic step. It aims at fostering a culture of shared knowledge and responsible AI deployment.

Addressing Common Misconceptions

Openness in AI doesn’t equate to a compromise in competitive advantage. It’s about nurturing a symbiotic ecosystem where innovation and transparency thrive concurrently.

Tackling Practical Implications

More transparency could pave the way for robust community-driven scrutiny. This ensures the safe and ethical utilization of AI technologies.

The Key Takeaway 🔑

A shift towards transparency isn’t merely beneficial; it’s imperative. It fosters the sustainable growth of AI as a scientific field and a societal asset. It’s about relegating the fears associated with AI’s obscure nature to the annals of history. Additionally, it champions a future where AI serves as an open book, ready to be read, understood, and enhanced by all and sundry.

FAQs

  1. How does the secrecy around AI impact the scientific community? The secrecy can stifle the free flow of ideas, innovations, and collaborations. It turns the field into a competitive race shrouded in proprietary veils. This shift veers away from an open frontier of exploration and shared knowledge.
  2. What does the lack of transparency in AI entail? Lack of transparency in AI leads to a myriad of challenges. It includes a lack of understanding of how decisions are made by AI systems, potential bias, and a lack of accountability. Moreover, it hampers the ability of users and stakeholders to interrogate or challenge AI-driven decisions. This makes it a pressing concern.
  3. What measures can be taken to foster transparency in AI? Measures can include open-source initiatives, transparent reporting of AI methodologies, and data handling practices. Additionally, third-party audits, and creating standards and certifications for transparency and ethical AI practices are beneficial. These steps collectively contribute to fostering transparency in AI.
  4. How can the general populace be educated about the workings of AI? Initiatives like public forums, educational courses, and open-access resources are crucial. Transparent communication from organizations and governments can also play a vital role. These efforts help in demystifying AI for the general populace.
  5. Why is the shift towards transparency termed a ‘pragmatic’ step? It’s termed pragmatic because it addresses real-world concerns like trust, accountability, and ethical considerations, ensuring AI technologies are developed and deployed responsibly. It therefore benefits a broader spectrum of society, making the shift towards transparency a practical and necessary step.

In Conclusion

As we unveil the shroud covering AI, the journey from obscurity to transparency emerges as not just a scientific necessity, but a societal obligation. The discourse around AI’s secrecy isn’t merely academic; it’s a dialogue that beckons us all. As AI becomes a staple in our digital lives, the narrative from Stanford University is a stark reminder. The time for fostering openness in AI is now. Let’s embrace the future of AI with open arms and open codes.

For more AI news, check out our blog.
