<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI models Updates | BeRightNews</title>
	<atom:link href="https://berightnews.com/tag/ai-models/feed/" rel="self" type="application/rss+xml" />
	<link>https://berightnews.com</link>
	<description>Latest International News &#38; Sports Updates</description>
	<lastBuildDate>Sat, 04 Apr 2026 21:16:31 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://berightnews.com/wp-content/uploads/2026/02/cropped-ChatGPT-Image-6-февр.-2026-г.-17_07_32-32x32.png</url>
	<title>AI models Updates | BeRightNews</title>
	<link>https://berightnews.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Google Unveils Gemma 4: A New Era in AI Models</title>
		<link>https://berightnews.com/2026/04/05/gemma-google-unveils-4-a-new-era-in/</link>
		
		<dc:creator><![CDATA[newsroom]]></dc:creator>
		<pubDate>Sat, 04 Apr 2026 21:16:31 +0000</pubDate>
				<category><![CDATA[Trending]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Gemma 4]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Integration]]></category>
		<category><![CDATA[Web3]]></category>
		<category><![CDATA[ZetaChain]]></category>
		<guid isPermaLink="false">https://berightnews.com/2026/04/05/gemma-google-unveils-4-a-new-era-in/</guid>

					<description><![CDATA[<p>Google has officially launched Gemma 4, a groundbreaking AI model that enhances multi-step planning and logic capabilities for developers.</p>
<p>The post <a href="https://berightnews.com/2026/04/05/gemma-google-unveils-4-a-new-era-in/">Google Unveils Gemma 4: A New Era in AI Models</a> appeared first on <a href="https://berightnews.com">berightnews</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>How it unfolded</h2>
<p>On April 2, 2026, Google made a significant leap in artificial intelligence by unveiling Gemma 4, a new generation of open models designed for multi-step planning and deep logic. This development sets the stage for a transformative shift in how developers engage with AI technologies, providing them with powerful tools to create sophisticated applications.</p>
<p>Gemma 4 is built on the same research and technology foundation as Gemini 3 and is released under the Apache 2.0 license. This strategic move by Google aims to offer developers a robust framework for building autonomous agents capable of interacting with tools and application programming interfaces (APIs).</p>
<p>The new model comes in four sizes: Effective 2B (E2B), Effective 4B (E4B), 26B Mixture of Experts (MoE), and 31B Dense. Each variant caters to different computational needs, so developers can select the model best suited to their applications. Notably, the edge models feature a 128K context window, while the larger models support contexts of up to 256K tokens, enhancing their ability to process complex information.</p>
<p>Since the launch of its first generation, Gemma has been downloaded over 400 million times, demonstrating its widespread adoption and the demand for advanced AI solutions. Google emphasized that Gemma 4 complements their Gemini models, providing developers with the industry&#8217;s most powerful combination of both open and proprietary tools.</p>
<p>In a remarkable display of agility, ZetaChain integrated Google’s Gemma 4 AI model within a single day of its official release. This rapid integration allows smart contracts and decentralized applications to leverage sophisticated AI reasoning and generation capabilities, marking a significant acceleration in the convergence of advanced artificial intelligence and Web3 infrastructure.</p>
<p>Moreover, AMD announced Day One support for Gemma 4 across its Radeon GPUs, Instinct datacenter GPUs, and Ryzen AI CPUs, ensuring that developers have access to efficient AI tools optimized for various hardware configurations. This collaboration is expected to enhance the performance of applications built on the Gemma 4 framework.</p>
<p>Gemma 4 achieves benchmark scores competitive with models that have vastly more parameters, showcasing its efficiency and effectiveness. Additionally, it is natively trained on more than 140 languages, making it a versatile tool for developers working in diverse linguistic environments. As the landscape of AI continues to evolve, Gemma 4 stands out as a pivotal development that empowers developers to push the boundaries of what is possible with artificial intelligence.</p>
<p>The post <a href="https://berightnews.com/2026/04/05/gemma-google-unveils-4-a-new-era-in/">Google Unveils Gemma 4: A New Era in AI Models</a> appeared first on <a href="https://berightnews.com">berightnews</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Gemma 4 AI: A New Era in On-Device AI Technology</title>
		<link>https://berightnews.com/2026/04/03/gemma-4-ai-a-new-era-in-on/</link>
		
		<dc:creator><![CDATA[newsroom]]></dc:creator>
		<pubDate>Fri, 03 Apr 2026 19:43:12 +0000</pubDate>
				<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[autonomous agents]]></category>
		<category><![CDATA[code generation]]></category>
		<category><![CDATA[Gemma 4 AI]]></category>
		<category><![CDATA[Google DeepMind]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[speech recognition]]></category>
		<guid isPermaLink="false">https://berightnews.com/2026/04/03/gemma-4-ai-a-new-era-in-on/</guid>

					<description><![CDATA[<p>Gemma 4 AI, launched by Google DeepMind, introduces cutting-edge models that support over 140 languages and enable advanced functionalities.</p>
<p>The post <a href="https://berightnews.com/2026/04/03/gemma-4-ai-a-new-era-in-on/">Gemma 4 AI: A New Era in On-Device AI Technology</a> appeared first on <a href="https://berightnews.com">berightnews</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Just before the launch of Gemma 4 AI, anticipation was building in the tech community. Google DeepMind was set to unveil its latest family of models, promising to revolutionize on-device AI capabilities.</p>
<p>On the launch date, Gemma 4 was introduced as a state-of-the-art family of open models, capable of supporting over <strong>140 languages</strong>. This significant development marks a milestone in AI accessibility and usability.</p>
<p>Gemma 4 is available under the Apache 2.0 license, allowing developers to freely utilize and modify the technology. This open-source approach is expected to foster innovation and collaboration across various sectors.</p>
<p>The models are equipped with advanced features, including multi-step planning, autonomous action, offline code generation, and audio-visual processing. Notably, the E2B and E4B models support native audio input for enhanced speech recognition.</p>
<p>Gemma 4 achieves a prefill throughput of <strong>133 tokens per second</strong> on a Raspberry Pi 5, showcasing its efficiency even on lower-end devices. The models are also optimized for a range of hardware, from billions of Android devices to developer workstations.</p>
<p>Gemma 4 features a remarkable <strong>128K context window</strong>, allowing for the processing of long-form content, which is crucial for applications requiring extensive data analysis.</p>
<p>Moreover, the models include versions sized at <strong>26B</strong> and <strong>31B</strong>, specifically optimized for different hardware configurations. This flexibility enables developers to build autonomous agents that can interact with various tools and APIs effectively.</p>
<p>The implications of Gemma 4 are significant for developers and researchers alike. As one spokesperson noted, &#8220;The era of agentic experiences on-device is here, and we hope you are excited to start building on the edge.&#8221; This sentiment reflects the potential for transformative applications across industries.</p>
<p>Furthermore, the ability to generate high-quality offline code positions Gemma 4 as a valuable asset for developers, turning workstations into local-first AI code assistants.</p>
<p>As the technology continues to evolve, the focus remains on empowering users and developers to harness the full potential of AI in their projects. The launch of Gemma 4 AI is not just a technological advancement; it represents a shift towards more accessible and efficient AI solutions.</p>
<p>Details remain unconfirmed regarding future updates and enhancements, but the current capabilities of Gemma 4 AI are already setting a new standard in the field.</p>
<p>The post <a href="https://berightnews.com/2026/04/03/gemma-4-ai-a-new-era-in-on/">Gemma 4 AI: A New Era in On-Device AI Technology</a> appeared first on <a href="https://berightnews.com">berightnews</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>ChatGPT 5.4</title>
		<link>https://berightnews.com/2026/03/06/chatgpt-5-4-news/</link>
		
		<dc:creator><![CDATA[newsroom]]></dc:creator>
		<pubDate>Fri, 06 Mar 2026 08:38:47 +0000</pubDate>
				<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[enterprise]]></category>
		<category><![CDATA[GPT 5.4]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">https://berightnews.com/2026/03/06/chatgpt-5-4-news/</guid>

					<description><![CDATA[<p>OpenAI has launched ChatGPT 5.4, introducing new models designed for enterprise work with enhanced accuracy and efficiency.</p>
<p>The post <a href="https://berightnews.com/2026/03/06/chatgpt-5-4-news/">ChatGPT 5.4</a> appeared first on <a href="https://berightnews.com">berightnews</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>OpenAI has been competing with Anthropic, whose Claude models have gained popularity. Against this backdrop, OpenAI has released the GPT 5.4 Thinking and GPT 5.4 Pro models, marking a significant advancement in its AI offerings.</p>
<h2>New Developments</h2>
<p>GPT 5.4 Thinking is specifically designed for enterprise work, focusing on tasks such as coding and overseeing AI agents. OpenAI describes GPT 5.4 as its most factual model yet, boasting 18% fewer errors and 33% fewer false claims compared to its predecessor, GPT 5.2.</p>
<p>The new models are available to paying ChatGPT users and through the API, broadening access for professionals. The Thinking version of GPT 5.4 presents an upfront plan of its reasoning and lets users adjust responses midway, a notable feature for improving user interaction.</p>
<h2>Performance Improvements</h2>
<p>GPT 5.4 supports up to 1 million tokens of context, enabling long-horizon planning and execution. Performance benchmarks indicate that GPT 5.4 scored 83% wins or ties on GDPval and achieved 75% on OSWorld-Verified, a significant increase from 47.3% for GPT 5.2.</p>
<p>OpenAI claims that these advancements make answers more relevant and reduce the need for multiple turns in conversation, positioning GPT 5.4 as a step forward in making AI more reliable for professional work.</p>
<h2>Future Implications</h2>
<p>OpenAI&#8217;s CEO has clarified that safeguards will be implemented to prevent intelligence agencies from using GPT 5.4, following a $200 million deal with the Defense Department in 2025. The Pro version of GPT 5.4 is aimed at professionals who need maximum performance on complex tasks.</p>
<p>According to OpenAI, GPT 5.4 supports agentic activity more efficiently, using less compute and therefore costing less. This translates into faster developer workflows, more reliable agents, and higher-quality outputs across ChatGPT, the API, and Codex.</p>
<p>The post <a href="https://berightnews.com/2026/03/06/chatgpt-5-4-news/">ChatGPT 5.4</a> appeared first on <a href="https://berightnews.com">berightnews</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
