Google Introduces Gemma, Its New Open-Source AI Model

Google unveiled Gemma, a new family of open AI models, on Wednesday, with the stated goal of “assisting developers and researchers in building AI responsibly.” Gemma is built from the same research and technology as Gemini, Google’s closed-source models that power the Gemini chatbot (formerly Bard) and the AI tools in Workspace (formerly Duet AI).

Is Gemma Going To Help Google Win The AI Race?

In 2024, OpenAI and Google are still locked in a fierce AI race, and OpenAI appears to be ahead. Google ships a new feature or upgrade almost every other week, but OpenAI usually follows with something better. Google revealed Gemini in December, and OpenAI launched the GPT Store less than a month later. Last week, Google announced Gemini 1.5, a flashy update to Gemini, only for it to be swiftly overshadowed by the announcement of OpenAI’s AI video generator, Sora.

However, OpenAI still hasn’t released open-source versions of any of its models (transparency isn’t exactly one of its strong suits). Google hasn’t been completely transparent about how its AI models are developed either, but the company said in its release that it built Gemma because it believes in “making AI helpful for everyone.”

Gemma ships in two sizes: Gemma 2B and Gemma 7B. Both are pre-trained and also come in instruction-tuned variants. They can run on Google Cloud with GPU and TPU acceleration, or locally on a developer’s laptop or desktop CPU and GPU. Unlike the multimodal Gemini, Gemma supports text-to-text only.
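To make the two-sizes-times-two-variants lineup concrete, here is a minimal sketch of picking and loading a Gemma checkpoint via the Hugging Face `transformers` library. The repo IDs (`google/gemma-2b`, `-7b`, with an `-it` suffix for the instruction-tuned variants) reflect Hugging Face's naming at launch; treat them as assumptions if you are working from a different distribution channel such as Kaggle or Vertex AI.

```python
def gemma_repo_id(size: str, instruction_tuned: bool = False) -> str:
    """Map a Gemma size ('2b' or '7b') to its Hugging Face repo ID.

    Instruction-tuned variants carry an '-it' suffix; the bare ID is
    the pre-trained base model.
    """
    if size not in ("2b", "7b"):
        raise ValueError("Gemma ships in two sizes: '2b' and '7b'")
    suffix = "-it" if instruction_tuned else ""
    return f"google/gemma-{size}{suffix}"


# Loading the weights (requires accepting Google's license on the Hub
# and enough RAM/VRAM; runs on CPU or GPU as the article notes):
#
# from transformers import AutoModelForCausalLM, AutoTokenizer
# repo = gemma_repo_id("2b", instruction_tuned=True)
# tokenizer = AutoTokenizer.from_pretrained(repo)
# model = AutoModelForCausalLM.from_pretrained(repo)

print(gemma_repo_id("2b"))                          # google/gemma-2b
print(gemma_repo_id("7b", instruction_tuned=True))  # google/gemma-7b-it
```

The actual download is left commented out since it needs license acceptance and several gigabytes of weights; the helper itself just encodes the naming scheme.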

On performance, Google says Gemma “surpasses significantly larger models on key benchmarks while adhering to our rigorous standards for safe and responsible outputs.” (It should be noted that at the time of publication, we did not have access to the technical report.) Alongside Gemma, Google is also releasing a new Responsible Generative AI Toolkit, which includes resources for building LLMs responsibly, plus tools for debugging and safety classification.