Bing with ChatGPT also gave wrong answers during its presentation


The new Bing with ChatGPT is not as accurate as it seems. An investigation has revealed that the chatbot Microsoft will begin to include in search results gave a number of wrong answers during the demo the company held a few days ago, when it announced the integration of the AI developed by OpenAI into its search engine and into Edge, its browser.

According to Dmitri Brereton, an independent AI researcher, the Bing chatbot (powered by ChatGPT) gave more than one wrong answer to the questions Microsoft posed during the demo. One of these questions concerned the financial results of Gap, the clothing brand. The AI responded as follows:

“Gap Inc. reported a gross margin of 37.4%, adjusted for impairment charges related to Yeezy Gap, and merchandise margin decreased 370 basis points compared to last year due to higher discounts and inflationary increases in the price of raw materials.”

A simple Google search is all it takes to find Gap’s financial results for its last fiscal quarter and realize that the 37.4% cited by ChatGPT on Bing is in fact the unadjusted gross margin, that the figure adjusted for impairment charges is actually 38.7%, and that the merchandise margin therefore decreased 480 basis points, not 370.

Bing’s AI, powered by GPT-3, also says that Gap “reported an operating margin of 5.9%, adjusted for impairment charges and restructuring costs”, a percentage that does not even appear in the official company document. The clothing firm states that the operating margin is 4.6% including impairment and 3.9% excluding it.

Microsoft acknowledges that Bing’s AI can give wrong answers

The researcher has also shown that the Bing chatbot, in a way, makes up answers that can easily be checked with a Google search. Among other things, it claims that a particular vacuum cleaner is noisy, has a very short cable, and offers very limited suction. The very model the AI mentions, however, stands out for its low noise level and is completely cordless.

Microsoft, for its part, has told The Verge that it already anticipated the AI offering inaccurate answers during the testing phase. “We expect that the system may make mistakes during this preview period, and feedback is critical to help identify where things aren’t working well so we can learn and help the models get better,” says Caitlin Roulston, director of communications at Microsoft.

Meanwhile, users keep finding blunders in the answers. Bing’s AI, for example, has insisted that the current year is 2022, and has referred to itself as “Sydney”, a code name Microsoft used during the chatbot’s development.

Interestingly, Bard, the Google AI that competes with ChatGPT, also gave an incorrect answer during an official demo. The chatbot that the Mountain View company plans to integrate into its search engine claimed that the James Webb Space Telescope captured the first photo of an exoplanet, when the first such image was actually taken in 2004 with the Very Large Telescope (VLT).
