Google unveiled Gemma, a new family of lightweight, open artificial intelligence (AI) models. Developers and researchers now have access to two variants: Gemma 2B and Gemma 7B. The tech giant says Gemma was built from the same research and technology used to create its Gemini models, the latest of which, Gemini 1.5, was unveiled last week. These smaller language models can be used to build AI solutions tailored to specific tasks, and the company permits their distribution and responsible commercial use.
Google CEO Sundar Pichai announced Gemma on X (formerly Twitter), noting that the models perform well on language tasks and are available worldwide in two sizes (2B and 7B). Gemma supports a range of tools and systems, and developers can run it on laptops, workstations, or Google Cloud. Google has also created a dedicated landing page for developers, with quickstart links and code examples on its Kaggle Models page. Developers can deploy AI tools quickly using Vertex AI, or experiment with Gemma for free in Google Colab notebooks.
Google stated that both Gemma variants are available in pre-trained and instruction-tuned versions. The models integrate with widely used frameworks and tools such as TensorRT-LLM, MaxText, NVIDIA NeMo, and Hugging Face. With Vertex AI and Google Kubernetes Engine (GKE), the language models can run on workstations, laptops, or Google Cloud. To help developers create safe and responsible AI tools, the tech giant also released a new Responsible Generative AI Toolkit.
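As an illustration of the Hugging Face integration mentioned above, the sketch below shows how a developer might load the instruction-tuned 2B model with the `transformers` library. The model ID `google/gemma-2b-it` and the `<start_of_turn>` chat format follow Gemma's published conventions; the snippet assumes `transformers` is installed and that the Gemma license has been accepted on the Hugging Face Hub, and is a sketch rather than official Google code.

```python
def build_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's instruction-tuned turn format."""
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


if __name__ == "__main__":
    # Heavy imports kept here so the prompt helper stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Downloads several GB of weights on first run.
    tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
    model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")

    prompt = build_gemma_prompt("Explain what a language model is.")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pre-trained weights can instead be deployed through Vertex AI or GKE for production serving; the local `transformers` path shown here is the quickest way to experiment, for example inside a Colab notebook.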
According to benchmark results published by Google, Gemma surpasses Meta’s Llama 2 language model on several significant benchmarks, including BIG-Bench Hard (BBH), HumanEval, HellaSwag, and Massive Multitask Language Understanding (MMLU). Additionally, several sources indicate that Meta has already started working on Llama 3.
Releasing open-source smaller language models for researchers and developers is a growing trend in artificial intelligence. Existing open-source initiatives include Stability AI, Meta, MosaicML, and even Google itself with its Flan-T5 models.
When developers and data scientists outside of AI companies can experiment with the technology and build original tools, an ecosystem develops around it. Developer adoption also frequently surfaces shortcomings in a model's algorithms or training data that would have gone undetected before release, enabling companies to improve their models.