
A New Era for Siri: The Emergence of OpenELM

Chapter 1: The Evolution of Voice Assistants

When Siri debuted in 2011, it felt like a technological breakthrough, allowing users to interact with their devices using voice commands to access information and send messages. However, over the years, Siri's standing has diminished significantly; despite continuous updates from Apple, it has fallen behind competitors in areas such as contextual understanding and overall integration.

Interestingly, Apple has been contemplating voice interaction between humans and computers for more than three decades; the idea appears as far back as its 1987 Knowledge Navigator concept video. If this has been on Apple's radar for so long, why does Siri still lag?

Siri was not originally developed by Apple; it was acquired and then integrated into the company's ecosystem. After the purchase, Apple appeared to limit Siri's scope, confining it to basic tasks such as reporting the weather and sports scores or controlling device functions. Siri's original creators left to build a new AI assistant called Viv.

In recent years, Apple has stayed relatively quiet on the generative AI trend, but recent moves indicate a shift in approach. Apple now appears to recognize the significance of generative AI, albeit somewhat late. Notably, its focus does not seem to be on building a colossal Large Language Model (LLM) akin to those from Google or OpenAI.

A few days ago, Apple unveiled OpenELM, a new family of open-source models. The models are relatively small, with the largest containing roughly 3 billion parameters, as noted on the Hugging Face model cards. The authors have released four sizes (approximately 270M, 450M, 1.1B, and 3B parameters), along with training code and instruction-tuned variants intended for responding to user prompts. According to the creators, OpenELM outperforms existing open LLMs trained on public datasets.
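For readers who want to experiment, the checkpoints can be downloaded from Hugging Face. The sketch below shows one way to load and query a model with the transformers library; the repository ID, the trust_remote_code flag, and the pairing with a Llama 2 tokenizer reflect Apple's published usage notes as I understand them, so treat the exact names as assumptions and double-check the model cards.

```python
# Minimal sketch: loading an OpenELM checkpoint with Hugging Face transformers.
# Repo IDs and the tokenizer pairing are assumptions based on Apple's release notes;
# verify them against the current model cards before relying on this.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-3B-Instruct"  # smaller variants (270M, 450M, 1.1B) also exist

# OpenELM ships a custom modeling file, so remote code must be trusted.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# The release does not bundle its own tokenizer; Apple's examples pair it with Llama 2's.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "Explain in one sentence why small language models matter for phones."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```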

OpenELM was trained on roughly 1.8 trillion tokens drawn from a diverse mix of public datasets, including scientific articles, code, web pages, books, and social media. While its performance is competitive with similarly sized models such as OLMo, it remains unremarkable in the broader landscape of LLMs.

One notable aspect of OpenELM is that it is released under a permissive license, allowing for commercial use as long as Apple's attribution is maintained. This move is quite atypical for Apple, which is generally known for its secretive, closed-source approach.

By comparison, Meta released Llama 3 in 8B and 70B versions, while Apple has capped OpenELM at 3B parameters. This suggests a deliberate focus on generative AI that can run efficiently on-device, particularly on smartphones and computers.
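A back-of-envelope calculation makes the on-device argument concrete. The sketch below estimates how much RAM the weights alone would occupy; the precision choices (fp16 and a 4-bit quantized format) are illustrative assumptions, and activations and the KV cache are ignored.

```python
# Back-of-envelope estimate of RAM needed just to hold model weights.
# Parameter counts come from the article; bytes-per-parameter values are
# illustrative assumptions (fp16 and a 4-bit quantized format).
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    return num_params * bytes_per_param / 1024**3

for name, params in [("OpenELM-3B", 3e9), ("Llama 3 8B", 8e9)]:
    for precision, bytes_per_param in [("fp16", 2.0), ("int4", 0.5)]:
        print(f"{name} @ {precision}: ~{weight_memory_gb(params, bytes_per_param):.1f} GB")
```

At 4-bit precision, the 3B model's weights come to roughly 1.4 GB, well within a phone's memory budget, whereas an 8B model is already several times larger. That gap helps explain Apple's choice of scale.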

Reports indicate that Apple is working to incorporate AI into iPhones, possibly in upcoming iOS releases. This development may not solely rely on in-house expertise but could also involve collaborations. According to Bloomberg, discussions between Apple and OpenAI about potential integrations into iOS 18 have recently resumed.

If Apple genuinely intends to integrate generative AI into iOS, the most suitable application for an LLM would likely be Siri. Despite the nostalgia associated with its name, Siri's effectiveness has diminished over time. Some argue that it might be time to retire Siri in favor of an LLM with a fresh identity.

As groundbreaking as Siri was in its early days, it has become somewhat of a punchline. Perhaps an LLM will take its place. Whether this new AI assistant will retain the Siri name or adopt another remains uncertain. What are your thoughts? Share in the comments!

If you found this discussion engaging, feel free to explore my other articles or connect with me on LinkedIn. You can also check out my regularly updated repository featuring the latest in ML and AI news. I'm open to collaborations and projects, so don't hesitate to reach out.

For additional resources related to machine learning and artificial intelligence, visit my GitHub repository.

Chapter 2: The Future of Siri and OpenELM

The second video explores the implications of Apple's generative AI efforts, including discussions on privacy, custom models, and the potential for easy-to-build AI applications.

References

  1. Mehta et al., 2024, OpenELM: An Efficient Language Model Family with Open Training and Inference Framework
  2. Reuters, 2024, Apple renews talks with OpenAI for iPhone generative AI features
  3. Reuters, 2024, Apple in talks to let Google's Gemini power iPhone AI features
  4. Vuruma, 2024, From Cloud to Edge: Rethinking Generative AI for Low-Resource Design Challenges
  5. Groeneveld, 2024, OLMo: Accelerating the Science of Language Models

