Editing article
Title
Summary
Content
<p>Recently, Google launched the Claude 3 Sonnet model on Google Cloud. To read about how to get started, follow the blog below.</p><p><a href="https://medium.com/google-cloud/claude-3-on-google-cloud-20c65b308f01">Getting Started with Claude 3 on Google Cloud</a></p><p>In this blog, we will focus on how <strong>“Claude 3 Sonnet”</strong> and <strong>“Gemini 1.5 Pro”</strong> perform in a head-to-head battle, where I give both the exact same prompt and check which one performs better.</p><blockquote><strong>Disclaimer: While both Claude 3 and Gemini 1.5 Pro achieve similar overall performance, this comparison aims to highlight specific areas where one model might be preferable over the other.</strong></blockquote><blockquote>For ease of writing, I will call “<strong>Claude 3 Sonnet</strong>” simply “<strong>Claude</strong>” and “<strong>Gemini 1.5 Pro</strong>” simply “<strong>Gemini</strong>” for the rest of this blog.</blockquote><h3>Text Prompts Example 1:</h3><p>The first thing I tried was giving both models a very simple prompt, and both answered quite nicely.</p><h4>Prompt: How to make a Banana Protein Shake in less than 100 words</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eS3Clr9clSwV8OXXCq-j1g.png" /><figcaption>Claude</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bqxrfaBdTZR59qxDtYGJFQ.png" /><figcaption>Gemini</figcaption></figure><p>While the response time on Claude was <strong>~7 sec</strong>, Gemini responded in <strong>~3 sec</strong>. Also, the <strong>presentation of the response</strong> from Gemini is far better than Claude's: giving the response a nice title and presenting it as a list adds a nice touch to the overall experience.</p><h3>Text Prompts Example 2:</h3><p>In this scenario, I gave an incorrect spelling to see whether the model could pick that up and correct the prompt to give me the desired output.
Notice the intentional misspelling of “<strong>Moana</strong>” as “<strong>Maona</strong>”.</p><h4>Prompt: Tell me about the movie name Maona and who all starred in the movie</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*a5L4RBPmn-W1E5o4DxB3VQ.png" /><figcaption>Claude</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*l1ljo2mJmTLJEwbpz8xEIg.png" /><figcaption>Gemini</figcaption></figure><p>Despite <strong>very close response times</strong>, the outputs from the two models differed dramatically. While Gemini corrected the spelling mistake and returned the desired output, Claude could not identify even this small spelling mistake and did not give a response close to the query.</p><h3>Image Prompt Example 1:</h3><p>Here, I gave a single image along with a prompt asking some questions about it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lnsWMpIz61oidGEiqgktZg.jpeg" /><figcaption>Image used along with prompt</figcaption></figure><h4>Prompt: From the image try to guess the city and weather</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PenvLPi96WsEZYTWnFAVtQ.png" /><figcaption>Claude</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/653/1*ZmLQXcB8QQB1ymMzc9IySQ.png" /><figcaption>Gemini</figcaption></figure><p>In this test, there was a huge difference in time: while Claude took <strong>~10 sec</strong>, Gemini gave a very precise response in just <strong>~2 sec</strong>.</p><h3>Image Prompt Example 2:</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/736/0*B1CB2xnXCiE3zZ3p.jpg" /><figcaption>Image used for prompt</figcaption></figure><h4>Prompt: what is the name of character and give more details about him/her</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*B16o8NLA5-QZDgCIeUEaEA.png" /><figcaption>Claude</figcaption></figure><figure><img alt="" 
src="https://cdn-images-1.medium.com/max/1024/1*OSy5gcao_Kh_u6Lxh1pnAw.png" /><figcaption>Gemini</figcaption></figure><p>In the above scenario, despite <strong>very close response times</strong>, the outputs from the two models differed dramatically. Claude 3 did not give any response, saying that doing so would be against privacy, which I do not think is right. The character shown above is from a famous movie, and giving details about the character is not a privacy infringement at all.</p><p>This is not specific to movie characters either: Claude can identify famous cartoon characters such as Tom and Jerry or the Looney Tunes when given the same prompt as above, but when given an <a href="https://en.wikipedia.org/wiki/Anime">Anime</a> character, it says that “<em>It would go against respecting individual privacy</em>.”</p><h3>Image Prompt Example 3:</h3><p>Image taken from <a href="https://www.youtube.com/watch?app=desktop&v=QPE7zcqcJNU">YouTube</a>; please refer to the video to understand the integration steps.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Ve4WjE9ZfT4a63Ia.jpg" /><figcaption>Image used for prompt</figcaption></figure><h4>Prompt: What is this equation for? Solve the problem.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*77Y3ITQM8CypBgXi7ly0XA.png" /><figcaption>Claude</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5E-_M2UcCUV--4yikq0Yuw.png" /><figcaption>Gemini</figcaption></figure><p>In the above scenario, Gemini was faster than Claude, and Gemini's response was also quite readable. However, <strong>neither was able to solve the problem correctly</strong>. 
Claude made a mistake right from the first step, while Gemini performed the entire integration correctly and failed only at the very last step, while evaluating <strong>(390625/24 + 15625/6)</strong>.</p><h3>Chat Prompts:</h3><p>I asked one question and then a follow-up question related to the response to the first one. <br>While Gemini can maintain chat history and respond accordingly, Claude does not have this capability.</p><p>The response time for each question was between 1 and 2 sec.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/886/1*V7hMQ3_WkrAaJpG4eHWqkw.png" /><figcaption>Gemini</figcaption></figure><h3>Conclusions:</h3><ol><li><strong>Response Time: </strong>Gemini was faster than Claude 3 in all cases.</li><li><strong>Accuracy: </strong>Gemini had higher overall accuracy in its responses.</li><li><strong>Response Quality: </strong>Gemini gives precise answers when asked for them <em>(as shown in Image Prompt Example 1)</em>.</li><li><strong>Mathematical Capability: </strong>While neither reached the final answer of the double-integration problem, Gemini got all the way to the last step before failing.</li><li><strong>Chat Capability: </strong>The ability to have a continuous chat with Gemini is a big plus over Claude.</li><li><strong>Spell Correction with Context: </strong>Gemini can correct spelling by understanding the context of the prompt <em>(as shown in Text Prompt Example 2)</em>.</li><li><strong>Markdown Answers: </strong>Gemini formats its responses with markdown, which makes for a well-presented answer <em>(as shown in Text Prompt Example 1)</em>.</li></ol><h3>If you enjoyed this post, give it a clap! 👏 👏</h3><h4>Interested in similar content? 
Follow me on <a href="https://medium.com/@IVaibhavMalpani">Medium</a>, <a href="https://twitter.com/IVaibhavMalpani">Twitter</a>, <a href="https://www.linkedin.com/in/ivaibhavmalpani/">LinkedIn</a> for more!</h4><hr><p><a href="https://medium.com/google-cloud/claude-vs-gemini-39559efcde8e">Claude 3 Sonnet 🆚 Gemini 1.5 Pro</a> was originally published in <a href="https://medium.com/google-cloud">Google Cloud - Community</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>
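Since the step where Gemini reportedly slipped is plain fraction arithmetic, the correct value of that last expression is easy to verify. Here is a minimal sanity check (not part of the original post) using Python's standard fractions module:

```python
from fractions import Fraction

# The final arithmetic step of the double-integration problem,
# where Gemini reportedly failed: 390625/24 + 15625/6
total = Fraction(390625, 24) + Fraction(15625, 6)

print(total)         # 453125/24
print(float(total))  # 18880.208333333332
```

Using exact rational arithmetic avoids the floating-point and simplification slips that trip up both humans and LLMs on steps like this one.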
Author
Link
Published date
Image url
Feed url
Guid
Hidden blurb
--- !ruby/object:Feedjira::Parser::RSSEntry title: Claude 3 Sonnet Gemini 1.5 Pro url: https://medium.com/google-cloud/claude-vs-gemini-39559efcde8e?source=rss----e52cf94d98af---4 author: Vaibhav Malpani categories: - gemini - google-cloud-platform - vertex-ai - claude-3 - multimodal published: 2024-04-08 06:24:26.000000000 Z entry_id: !ruby/object:Feedjira::Parser::GloballyUniqueIdentifier is_perma_link: 'false' guid: https://medium.com/p/39559efcde8e carlessian_info: news_filer_version: 2 newspaper: Google Cloud - Medium macro_region: Blogs rss_fields: - title - url - author - categories - published - entry_id - content
Language
Active
Ricc internal notes
Imported via /Users/ricc/git/gemini-news-crawler/webapp/db/seeds.d/import-feedjira.rb on 2024-04-09 05:41:14 -0700. Content is EMPTY here. Entried: title,url,author,categories,published,entry_id,content. TODO add Newspaper: filename = /Users/ricc/git/gemini-news-crawler/webapp/db/seeds.d/../../../crawler/out/feedjira/Blogs/Google Cloud - Medium/2024-04-08-Claude_3_Sonnet__Gemini_1.5_Pro-v2.yaml
Ricc source