Meeting Mycroft: An Open AI Platform You Can Order Around By Voice

Mycroft developer Ryan Sipes, speaking from the show floor of this year's OSCON in Austin, Texas (see our video interview here), says that what started as a weekend project to use voice input and some light AI to locate misplaced tools in a makerspace morphed into a much more ambitious, and successfully crowd-funded, project -- hosted at the Lawrence Center for Entrepreneurship in Lawrence, Kansas -- once he and his fellow developers realized that speech recognition, and the interfaces to exploit it, were far more rudimentary than they had initially assumed.

How ambitious? Mycroft bills itself as "an open hardware artificial intelligence platform"; the goal is to allow you to "interact with everything in your house, and interact with all your data, through your voice." That's a familiar aim of late, but so far mostly from a shortlist of the biggest names in technology. Apple's Siri is exclusive to (and helps sell) Apple hardware; Google's voice interface likewise sells Android phones and tablets, and helps round out Google's apps-and-interfaces-for-everything approach. Amazon and Microsoft have poured resources into voice recognition, too. Amazon's Echo, running the company's Alexa voice service, is probably the most direct parallel to the Mycroft system on display at OSCON, in that it provides a dedicated box loaded with microphones and a speaker for two-way voice interaction.

The Mycroft system, though, is based on two of the first names in open hardware -- Raspberry Pi and Arduino -- and it's meant to be and stay open; all of its software is released under GPL v3. The initial hardware includes RCA ports, an Ethernet jack, four USB ports, HDMI, and dozens of addressable LEDs that form Mycroft's "face." That HDMI output might not be immediately useful, but Sipes points out that the hardware is powerful enough to play Netflix films or run multimedia software like Kodi, and to control them by voice. Unusually for a consumer device, even one aimed at hardware hackers, Mycroft also includes an accessible ribbon-cable port for users who'd like to hook up a camera or some other peripheral. Two other "ports" (of a sort) might appeal to just that kind of user, too: pop out the plugs emblazoned with the OSI Open Hardware logo, and the two holes on each side of Mycroft's case make it easy to attach the unit to a robot body or other mounting system.

The open-source difference in Mycroft isn't just in the hacker-friendly hardware. The real star of the show is the software (despite the hardware on offer, "We're a software company," says Sipes), and that's proudly open as well. The Python-based project is drawing on, and creating, open-source back-end tools, but isn't tied to any particular back end for interpreting or acting on the voice input it receives. The team has open sourced several tools so far: the Adapt intent parser, the text-to-speech engine Mimic (a fork of CMU's Flite), and the open speech-to-text engine OpenSTT.
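To give a flavor of how Adapt fits in, here is a minimal sketch in the spirit of the examples in the Adapt repository; the vocabulary, intent name, and sample utterance below are illustrative, not anything that ships with Mycroft:

```python
from adapt.intent import IntentBuilder
from adapt.engine import IntentDeterminationEngine

# Register a small vocabulary of keywords and entities with the engine.
engine = IntentDeterminationEngine()
for keyword in ["weather", "forecast"]:
    engine.register_entity(keyword, "WeatherKeyword")
for city in ["lawrence", "austin", "seattle"]:
    engine.register_entity(city, "Location")

# Describe an intent in terms of required and optional vocabulary.
weather_intent = (
    IntentBuilder("WeatherIntent")
    .require("WeatherKeyword")
    .optionally("Location")
    .build()
)
engine.register_intent_parser(weather_intent)

# Hand the engine an utterance (in Mycroft, the output of speech-to-text)
# and see which intent, if any, it resolves to.
for intent in engine.determine_intent("what is the weather in lawrence"):
    if intent.get("confidence", 0) > 0:
        print(intent["intent_type"], intent.get("Location"))
```

Nothing in that pipeline cares where the text came from or what ultimately acts on the resulting intent, which is part of what lets the speech-to-text and text-to-speech layers stay swappable.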

The commercial projects named above (Siri et al.) may offer varying degrees of privacy or extensibility, but ultimately they all come from "large companies that work really hard to mine your data" and to keep each user in a silo, says Sipes. By contrast, "We're like Switzerland." With Mycroft, the speech-recognition and speech-synthesis tools are swappable, and there's an active developer community adding new voice-activated capabilities ("skills") to the system.
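Those skills are plain Python classes. As a rough sketch only (assuming the MycroftSkill base class and registration hooks shown in the project's published skill examples; the skill name, keyword, and spoken response here are invented), a minimal skill might look something like this:

```python
from adapt.intent import IntentBuilder
from mycroft.skills.core import MycroftSkill


class HelloWorldSkill(MycroftSkill):
    """A bare-bones skill: one keyword, one spoken reply."""

    def __init__(self):
        super(HelloWorldSkill, self).__init__(name="HelloWorldSkill")

    def initialize(self):
        # The "HelloWorldKeyword" vocabulary would normally live in the
        # skill's vocab files; the name here is purely illustrative.
        intent = IntentBuilder("HelloWorldIntent").require("HelloWorldKeyword").build()
        self.register_intent(intent, self.handle_hello)

    def handle_hello(self, message):
        # Mimic (or whichever text-to-speech engine is swapped in)
        # turns this string into audio.
        self.speak("Hello from an open assistant.")

    def stop(self):
        pass


# mycroft-core looks for a factory function like this when loading a skill.
def create_skill():
    return HelloWorldSkill()
```

Registering the intent is all Adapt needs; from there, the same kind of pipeline sketched above routes matching utterances to handle_hello.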

And if you can program Python, your idea could be next.  
