Google Launches Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002 AI Models With Faster Output


Google has launched the Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002 AI models, which offer faster output, higher rate limits, and updated filter settings.

Google updated its Gemini 1.5 Pro artificial intelligence (AI) model on Tuesday. Just a few months ago, the Mountain View-based tech giant released the previous version of Gemini, which expanded the context window to as many as 2 million tokens. The new AI models are called Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002.

The company says the new models offer faster output, lower costs, and higher rate limits. The filter settings have also been changed so that the models follow instructions more closely.

Enhanced Features of Gemini-1.5-Pro-002 and Flash-002 Models

Google detailed the Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002 AI models in a blog post. The models are available now and build on the Gemini 1.5 models showcased at Google I/O in May. For the moment, they are offered only to developers and enterprise customers: developers can access them for free through Google AI Studio and the Gemini API, while businesses can access them via Vertex AI.
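For readers who want to see what targeting the new models looks like in practice, here is a minimal sketch using the google-generativeai Python SDK. The SDK, the API key placeholder, and the assumption that the models are exposed under the versioned IDs gemini-1.5-pro-002 and gemini-1.5-flash-002 are illustrative details, not taken from the article.

```python
# Minimal sketch: calling one of the new models through the Gemini API.
# Assumes the google-generativeai Python SDK (pip install google-generativeai)
# and that the model is exposed under the ID "gemini-1.5-pro-002".
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key from Google AI Studio

# Select the updated Pro model by its versioned ID.
model = genai.GenerativeModel("gemini-1.5-pro-002")

# Send a single prompt and print the generated text.
response = model.generate_content("Summarize the key changes in the -002 Gemini models.")
print(response.text)
```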

According to Google's own benchmarks, the new Gemini 1.5 Pro and Flash models also outperform the previous generation. The company said the new models show an approximately 7 percent improvement on MMLU-Pro, a more challenging version of the Massive Multitask Language Understanding benchmark. The models are also said to score roughly 20 percent higher on the MATH and HiddenMath benchmarks than the earlier Gemini 1.5 Pro.

The Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002 AI models also come with higher rate limits, which cap how many requests a user can send to the model in a given period. Gemini 1.5 Flash now allows 2,000 requests per minute (RPM), while Gemini 1.5 Pro allows 1,000 RPM. Google said it is raising these limits so that developers can more easily build with the new Gemini models.
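To put those limits in context, a client that exceeds its RPM quota typically receives a quota error and must back off before retrying. The sketch below assumes the google-generativeai Python SDK and that quota errors surface as google.api_core.exceptions.ResourceExhausted; both are assumptions made for illustration, not details from Google's announcement.

```python
# Hedged sketch: retrying a request with exponential backoff when the
# per-minute rate limit (RPM) is exceeded. Assumes quota errors are raised
# as google.api_core.exceptions.ResourceExhausted by the SDK.
import time

import google.generativeai as genai
from google.api_core.exceptions import ResourceExhausted

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash-002")

def generate_with_backoff(prompt, max_retries=5):
    delay = 1.0
    for _ in range(max_retries):
        try:
            return model.generate_content(prompt).text
        except ResourceExhausted:
            # Hit the RPM cap; wait and retry with a doubling delay.
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("Rate limit still exceeded after retries")

print(generate_with_backoff("Give one sentence on rate limits."))
```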

The improvements go beyond rate limits. The company has also increased the number of output tokens generated per second for Gemini-1.5-Pro-002 and Flash-002, making the models faster and reducing the time it takes to produce long blocks of text.
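Faster token generation is most noticeable when responses are streamed, since chunks of text arrive as they are produced rather than after the full reply is finished. Below is a small, hedged sketch of streaming with the google-generativeai Python SDK; the SDK and the stream=True flag are standard Gemini API usage rather than something spelled out in the article.

```python
# Hedged sketch: streaming a long response so text is printed
# as soon as the model produces it.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-pro-002")

# stream=True yields the response in chunks instead of one final blob.
for chunk in model.generate_content(
    "Write a detailed, multi-paragraph overview of context windows.",
    stream=True,
):
    print(chunk.text, end="", flush=True)
print()
```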

Another notable change concerns the filters. Google said the new Gemini AI models follow prompts and instructions more closely because the filters have been updated. The company is also continuing to improve its safety features to ensure the models do not generate harmful content. However, the default filters are no longer applied automatically to the new models, leaving developers free to choose the configuration that best suits their use case.
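Since the default filters are no longer applied automatically, developers are expected to pick their own configuration. A hedged sketch of what that might look like with the google-generativeai Python SDK is shown below; the specific categories and thresholds are illustrative assumptions, not settings quoted from Google's post.

```python
# Hedged sketch: explicitly choosing safety-filter thresholds per category
# instead of relying on defaults. Category and threshold names come from the
# google-generativeai SDK and are used here purely as an illustration.
import google.generativeai as genai
from google.generativeai.types import HarmBlockThreshold, HarmCategory

genai.configure(api_key="YOUR_API_KEY")  # placeholder

model = genai.GenerativeModel(
    "gemini-1.5-flash-002",
    safety_settings={
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)

response = model.generate_content("Explain how configurable safety filters work.")
print(response.text)
```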
