

Showing posts with label AI technology. Show all posts

Tuesday, 8 April 2025

How A.I. Chatbots Like ChatGPT and DeepSeek Reason

 

An illustrative image showing the artificial intelligence chatbot ChatGPT on a smartphone in San Francisco. In September of last year, OpenAI released a new “reasoning” version of its ChatGPT chatbot that was designed to spend time “thinking” through complex problems before settling on an answer. Now other companies like Google, Anthropic and China’s DeepSeek offer similar technologies. — KELSEY MCCLELLAN/NYT

In September, OpenAI unveiled a new version of ChatGPT designed to reason through tasks involving math, science and computer programming. Unlike previous versions of the chatbot, this new technology could spend time “thinking” through complex problems before settling on an answer.

Soon, the company said its new reasoning technology had outperformed the industry’s leading systems on a series of tests that track the progress of artificial intelligence.

Now other companies, like Google, Anthropic and China’s DeepSeek, offer similar technologies.

But can AI actually reason like a human? What does it mean for a computer to think? Are these systems really approaching true intelligence?

Here is a guide.

What does it mean when an AI system reasons?

Reasoning just means that the chatbot spends some additional time working on a problem.

“Reasoning is when the system does extra work after the question is asked,” said Dan Klein, a professor of computer science at the University of California, Berkeley, and chief technology officer of Scaled Cognition, an AI startup.

It may break a problem into individual steps or try to solve it through trial and error.

The original ChatGPT answered questions immediately. The new reasoning systems can work through a problem for several seconds – or even minutes – before answering.

Can you be more specific?

In some cases, a reasoning system will refine its approach to a question, repeatedly trying to improve the method it has chosen. Other times, it may try several different ways of approaching a problem before settling on one of them. Or it may go back and check some work it did a few seconds before, just to see if it was correct.

Basically, the system tries whatever it can to answer your question.

This is kind of like a grade school student who is struggling to find a way to solve a math problem and scribbles several different options on a sheet of paper.
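
To make that concrete, here is a minimal Python sketch of the trial-and-error idea described above: propose candidate answers, check each one against the problem, and keep the first one that passes. It is a toy illustration, not any company's actual implementation; the equation and the candidate range are invented for the example.

    def check(x):
        # Self-check: substitute the candidate back into the equation x + 7 = 12.
        return x + 7 == 12

    def reason(candidates):
        # Trial-and-error "reasoning": try candidates one by one and return
        # the first answer that survives its own verification step.
        for x in candidates:
            if check(x):
                return x
        return None

    print(reason(range(-20, 21)))  # prints 5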

What sort of questions require an AI system to reason?

It can potentially reason about anything. But reasoning is most effective when you ask questions involving math, science and computer programming.

How is a reasoning chatbot different from earlier chatbots?

You could ask earlier chatbots to show you how they had reached a particular answer or to check their own work. Because the original ChatGPT had learned from text on the internet, where people showed how they had gotten to an answer or checked their own work, it could do this kind of self-reflection, too.

But a reasoning system goes further. It can do these kinds of things without being asked. And it can do them in more extensive and complex ways.

Companies call it a reasoning system because it feels as if it operates more like a person thinking through a hard problem.

Why is AI reasoning important now?

Companies like OpenAI believe this is the best way to improve their chatbots.

For years, these companies relied on a simple concept: The more internet data they pumped into their chatbots, the better those systems performed.

But in 2024, they used up almost all of the text on the internet.

That meant they needed a new way of improving their chatbots. So they started building reasoning systems.

How do you build a reasoning system?

Last year, companies like OpenAI began to lean heavily on a technique called reinforcement learning.

Through this process – which can extend over months – an AI system can learn behavior through extensive trial and error. By working through thousands of math problems, for instance, it can learn which methods lead to the right answer and which do not.

Researchers have designed complex feedback mechanisms that show the system when it has done something right and when it has done something wrong.

“It is a little like training a dog,” said Jerry Tworek, an OpenAI researcher. “If the system does well, you give it a cookie. If it doesn’t do well, you say, ‘Bad dog’.”
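
The “cookie” idea can be sketched in a few lines of Python. In the toy below, two made-up strategies compete on one arithmetic question, and the strategy that collects more reward over many trials is the one a real training run would come to favour. This is only a caricature of reinforcement learning, not OpenAI’s actual pipeline.

    import random

    def reward(answer, correct_answer):
        # The "cookie": a point for a right final answer, nothing otherwise.
        return 1.0 if answer == correct_answer else 0.0

    # Two hypothetical ways a system might tackle "What is 17 * 3?"
    strategies = {
        "random_guess": lambda: random.randint(0, 100),
        "add_step_by_step": lambda: sum(17 for _ in range(3)),
    }

    # Tally how often each strategy earns a reward over many attempts.
    scores = {name: 0.0 for name in strategies}
    for _ in range(1000):
        name = random.choice(list(strategies))
        scores[name] += reward(strategies[name](), 51)

    print(scores)  # the step-by-step strategy collects far more reward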

(The New York Times sued OpenAI and its partner, Microsoft, in December for copyright infringement of news content related to AI systems.)

Does reinforcement learning work?

It works pretty well in certain areas, like math, science and computer programming. These are areas where companies can clearly define the good behavior and the bad. Math problems have definitive answers.

Reinforcement learning doesn’t work as well in areas like creative writing, philosophy and ethics, where the distinction between good and bad is harder to pin down. Researchers say this process can generally improve an AI system’s performance, even when it answers questions outside math and science.

“It gradually learns what patterns of reasoning lead it in the right direction and which don’t,” said Jared Kaplan, chief science officer at Anthropic.

Are reinforcement learning and reasoning systems the same thing?

No. Reinforcement learning is the method that companies use to build reasoning systems. It is the training stage that ultimately allows chatbots to reason.

Do these reasoning systems still make mistakes?

Absolutely. Everything a chatbot does is based on probabilities. It chooses a path that is most like the data it learned from – whether that data came from the internet or was generated through reinforcement learning. Sometimes it chooses an option that is wrong or does not make sense.
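
The sketch below illustrates that probabilistic choice with invented numbers: most samples land on the most likely continuation, but some do not, which is roughly why the mistakes persist. The probabilities are made up for illustration and do not come from any real model.

    import random

    # Toy next-word probabilities for the prompt "The capital of Australia is ..."
    next_word_probs = {"Canberra": 0.6, "Sydney": 0.35, "Melbourne": 0.05}

    words = list(next_word_probs)
    weights = list(next_word_probs.values())

    # Sampling by probability usually picks the right word, but not always.
    samples = [random.choices(words, weights=weights)[0] for _ in range(10)]
    print(samples)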

Is this a path to a machine that matches human intelligence?

AI experts are split on this question. These methods are still relatively new, and researchers are still trying to understand their limits. In the AI field, new methods often progress very quickly at first, before slowing down. – ©2025 The New York Times Company

This article originally appeared in The New York Times.

Related:

STaR is a pioneering step toward AI that not only performs better but also learns, reasons, and explains itself in ways that are deeply aligned with human reasoning.



Thursday, 21 September 2023

Huawei eyes building up China’s computing foundation to offer the world a second option: Meng Wanzhou

Attendees visit the Huawei booth at Mobile World Congress Shanghai 2023 on June 28, 2023. Photo: VCG


Chinese telecom giant Huawei Technologies is aiming to help build the foundation of China’s computing power and to offer the world a “second option,” Huawei’s rotating chair Meng Wanzhou said on Wednesday, as the US and some of its Western allies push for a complete tech decoupling.

At Huawei Connect 2023, which showcased the company’s latest products and technologies, Meng announced a number of new initiatives to bolster its computing base as part of the company’s “All Intelligence” strategy.

Meng said that computing power is at the core of artificial intelligence development, and that Huawei will build a robust computing power foundation to support the diverse requirements of various industries.

“We support every organization and every industry in training their large models using their own information,” Meng said.

According to Meng, Huawei’s All Intelligence strategy aims to accelerate the “intelligence” of all industries, including connecting “everything,” both virtual and physical, allowing model applications to benefit everyone, and providing computing power for every decision.

Intelligent transformation is a global trend in manufacturing and is crucial for the high-quality development of China’s manufacturing industry. Intelligence, and the computing power that underpins it, has become a focal point of global technological competition.

Huawei, which has been a top target of the US’ technological crackdown, has been investing heavily in building up its computing power and its large language model (LLM).

According to Meng, LLMs, which absorb massive amounts of knowledge, can be applied to many scenarios; they lower the threshold for AI development and application and open up the possibility of solving large-scale industrial problems.

The computing power requirement of an LLM doubles every four months, according to Zhou Bin, CTO of Huawei’s Ascend Computing Business.

Huawei has invested in research and development for decades, including in areas such as chemistry, materials science, physics and engineering, and the combination of its connectivity and computing technologies has given it an advantage in intelligent products and systems.

Meng said Huawei is also focused on personnel training through cooperation with colleges.

Huawei is working with 2,600 universities around the world to jointly build information and communication technology academies, which train 200,000 students annually. Its “smart base” projects with 72 Chinese universities have provided more than 1,600 courses to 500,000 students, according to media reports.

“We invest about US$3 billion to US$5 billion annually in basic theoretical research,” Ren Zhengfei, Huawei’s founder, said at an International Collegiate Programming Contest event in August 2023.

Source link

RELATED ARTICLES

China's Ministry of State Security reveals US' infiltration of Huawei traced back to 2009

 


Sunday, 12 February 2023

ChatGPT And The Future Of AI, Turkey Earthquakes. Part 1

 


How Scientists Predict Where Earthquakes Will Strike Next

The pair of earthquakes that hit Turkey and Syria this week left the region grappling with death and destruction. Although the region is seismically active, this particular area hadn’t seen an earthquake of this size for decades. There are ways of knowing where the next big earthquakes will happen, but not when. Scientists use knowledge of fault lines and historical data to make their predictions, but saving areas from mass casualties often relies on infrastructure policies. Building codes that prioritize strong buildings can save lives, but older structures remain vulnerable.

Across the globe, in California, the health impacts of electric vehicles are beginning to be seen. A study published this month finds that for every 20 EVs in a zip code, asthma-related visits to the emergency room drop by 3.2%. This is a striking number for a technology that’s just now becoming more commonplace. Joining Ira to talk about these stories and more is Umair Irfan, staff writer at Vox, based in Washington, D.C. 

--------------------------------------------------------------------------------------------------------------------------

ChatGPT And Beyond: What’s Behind The AI Boom?

The past few months have seen a flurry of new, easy-to-use tools driven by artificial intelligence. It’s getting harder to tell what’s been created by a human: Programs like ChatGPT can construct believable written text, apps like Lensa can generate stylized avatars, while other developments can make pretty believable audio and video deep fakes.

Just this week, Google unveiled a new AI-driven chatbot called Bard, and Microsoft announced plans to incorporate ChatGPT within their search engine Bing.  What is this new generation of AI good at, and where does it fall short?

Ira talks about the state of generative AI and takes listener calls with Dr. Melanie Mitchell, professor at the Santa Fe Institute and author of the book, Artificial Intelligence: A Guide for Thinking Humans. They are joined by Dr. Rumman Chowdhury, founder and CEO of Parity Consulting and responsible AI fellow at the Berkman Klein Center at Harvard University. 

------------------------------------------------------------------------------------------------------------------------ 

Transcripts for each segment will be available the week after the show airs on sciencefriday.com.

Source link

Related:

ChatGPT, the future of AI

 7 problems facing Bing, Bard, and the future of AI search

 

 

Related posts:


  OpenAI, which Elon Musk helped to co-found back in 2015, is the San Francisco-based startup that created ChatGPT. The company opened Ch...
 

Microsoft is rolling out an intelligent chatbot to live alongside Bing’s search results, putting AI that can summarise web pages, synthesise...

Monday, 14 September 2020

Educated yet amoral: GPT-3 AI capable of writing books sparks awe

An AI technology has won praise for its ability to generate coherent stories, novels and even computer code. — AFP Relaxnews





An artificial intelligence (AI) technology made by a firm co-founded by billionaire Elon Musk has won praise for its ability to generate coherent stories, novels and even computer code but it remains blind to racism or sexism.

GPT-3, as Californian company OpenAI’s latest AI language model is known, is capable of completing a dialogue between two people, continuing a series of questions and answers or finishing a Shakespeare-style poem.

Start a sentence or text and it completes it for you, basing its response on the gigantic amount of information it has been fed.

This could come in useful for customer service, lawyers needing to sum up a legal precedent or for authors in need of inspiration.

While the technology is not new and has not yet learnt to reason like a human mind, OpenAI’s latest offering has won praise for the way its text resembles human writing.

“It is capable of generating very natural and plausible sentences,” says Bruce Delattre, an AI specialist at data consulting agency Artefact.

“It’s impressive to see how much the model is able to appropriate literary styles, even if there are repetitions.”

GPT-3 is also capable of finding precise responses to problems, such as the name of an illness from a description of symptoms.

It can solve some mathematical problems, express itself in several languages, or generate computer code for simple tasks that developers have to do but would happily avoid.

Delattre tells AFP it all works thanks to “statistical regularities”.

“The model knows that a particular word (or expression) is more or less likely to follow another.”  
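
Those statistical regularities can be shown in miniature with a toy word-count model. Real language models are vastly larger and more sophisticated, but the underlying intuition is the same: some words are simply more likely to follow others.

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # Count which word follows which: the "statistical regularities" in miniature.
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    # After "the", this tiny corpus has seen "cat" twice and "mat" once,
    # so "cat" is the more likely continuation.
    print(following["the"].most_common())  # [('cat', 2), ('mat', 1)]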

Billions of web pages

Amine Benhenni, scientific director at AI research and development firm Dataswati, tells AFP that “the big difference” compared to other systems is the size of the model.

GPT-3 has been fed the content of billions of web pages that are freely available online and all types of pieces of written work.

To give an idea of the magnitude of the project, the entire content of online encyclopaedia Wikipedia represents just 3% of all the information it has been given.

As such, it does not need to be retrained to perform tasks, as previous models did, when a new subject is introduced like medicine, law or the media.

Give it just a handful of examples of a task to do, such as completing a sentence, and it will then know how to complete any sentence it is given, no matter what the subject – a so-called “few-shot” language model.
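
In practice, “few-shot” simply means the task is demonstrated inside the prompt itself. The sketch below shows what such a prompt looks like; the translation examples are of the kind used in OpenAI’s GPT-3 paper, and the model is expected to continue the pattern, although the exact completion is never guaranteed.

    # A few worked examples followed by a new case; the model is expected to
    # continue the pattern without any retraining.
    few_shot_prompt = "\n".join([
        "Translate English to French.",
        "sea otter -> loutre de mer",
        "cheese -> fromage",
        "peppermint -> menthe poivrée",
        "plush giraffe ->",
    ])

    # The prompt is sent to the model as-is; a well-primed model typically
    # completes the last line with "girafe en peluche".
    print(few_shot_prompt)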

“It’s amazingly powerful if you know how to prime the model well,” Shreya Shankar, an AI-specialised computer scientist, said on Twitter after having used GPT-3.

“It’s going to change the ML (machine learning) paradigm.”

Despite the hype, however, GPT-3 is only 10th on the SuperGLUE benchmark that measures the language-understanding of algorithms.

That is partly because, as some users have demonstrated, the model responds with senseless answers when asked absurd questions.

For instance, developer Kevin Lacker asked: “How many eyes does the sun have?”

“The sun has one eye,” it responded, Lacker wrote on his blog.

Fake reviews, fake news

Claude de Loupy, co-founder of French startup Syllabs that specialises in automated text creation, says the system lacks “pragmatism”.

Another major problem is that it replicates without a second thought any stereotype or hate speech fed during its training period, and can quickly become racist, anti-semitic or sexist.

As such, experts interviewed by AFP felt GPT-3 was not reliable enough for any sector needing to rely on machines, such as robo-journalism or customer services.

It can however be useful, like other similar models, for writing fake reviews or even mass-producing news stories for a disinformation campaign.

Concerned about “malicious applications of the technology”, OpenAI, which was co-founded in 2015 by Musk who has since left, and is financed by Microsoft among others, chose not to release the previous version of the model, GPT-2, in February 2019.

Originally a non-profit, OpenAI then became a “capped profit” company, which means investors get a capped return.

And in June, the firm changed tack and opened its GPT-3 model to commercial use, allowing for user feedback.

A step Claude de Loupy says could yield big profits.

There is “no doubt that the amount of text generated by AI is about to explode on the Web”. – AFP

Source link

GPT 3 Demo and Explanation - An AI revolution from OpenAI



Half Ideas - Startups and Entrepreneurship

GPT 3 can write poetry, translate text, chat convincingly, and answer abstract questions. It's being used to code, design and much more. I'll give you a demo of some of the latest in this technology and some of how it works.

GPT-3 comes from a company called OpenAI. OpenAI was founded by Elon Musk and Sam Altman (former president of the startup accelerator Y Combinator), with over a billion dollars invested, to collaborate on and create human-level AI for the benefit of society.

GPT-3 has been in development for a number of years. One of the early papers published was on generative pre-training. The idea behind generative pre-training (GPT) is that while most AIs are trained on labeled data, there's a ton of data that isn't labeled. If you can evaluate the words and use them to train and tune the AI, it can start to predict future text on the unlabeled data. You repeat the process until the predictions start to converge.
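
The core of that idea can be sketched in a few lines: raw, unlabeled text is turned into (context, next-word) training pairs, so the text supplies its own supervision. The snippet below only builds the pairs; the actual training of a prediction model on them is omitted.

    text = "unlabeled text becomes its own training signal because every word has a next word"
    tokens = text.split()

    # Each position in the raw text yields a (context, next-word) pair.
    # No human labels are needed, which is the point of generative pre-training.
    pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

    for context, target in pairs[:3]:
        print(context, "->", target)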

The newest GPT is able to do a ton. Some of the demos include:
- GPT-3 demo of how to design a user interface using AI
- GPT-3 demo of how to code a React application using AI
- GPT-3 demo of an Excel plug-in to fill data using AI
- GPT-3 demo of a search engine/answer engine using AI
- GPT-3 demo of command-line auto-complete from English to shell commands


And more. I've posted all the embedded tweets and videos on my site:
https://gregraiz.com/gpt-3-demo-and-e...

You can also follow me on twitter here:
https://www.twitter.com/graiz

The paper on Language Models are Few-Shot Learners is available to read:
 https://arxiv.org/abs/2005.14165







https://youtu.be/G6Z_S6hs29s
https://youtu.be/cpWEXQkpBFQ
 https://youtu.be/tsuxlU5IwuA


OpenAI GPT-3: Beginners Tutorial



OpenAI has released GPT-3, a state-of-the-art language model made up of 175 billion parameters. In this video, I'll create a simple tutorial on how you can use OpenAI's API to use the GPT-3 model.

The previous OpenAI GPT model, GPT-2, had 1.5 billion parameters and was the biggest model at the time. GPT-3 can write poetry, translate text, chat convincingly, and answer abstract questions.
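
For reference, a GPT-3 completion call with the OpenAI Python client of that era looked roughly like the sketch below. The engine name, prompt and parameters are illustrative placeholders, and the library has changed substantially since, so treat this as a period sketch rather than current guidance.

    import openai  # the OpenAI Python client as it existed during the GPT-3 beta

    openai.api_key = "YOUR_API_KEY"  # placeholder; access was granted via the request form linked below

    # A minimal completion request in the style shown in tutorials of that era.
    response = openai.Completion.create(
        engine="davinci",            # illustrative engine name
        prompt="Write a two-line poem about the sea:",
        max_tokens=40,
        temperature=0.7,
    )

    print(response["choices"][0]["text"])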

Link to Shreya's Repo :  https://github.com/shreyashankar/gpt3...

Link to the Notebook :  https://github.com/bhattbhavesh91/gpt...

Link to Request for API Access :  https://lnkd.in/eUTisGR

If you have any questions about what we covered in this video, feel free to ask in the comment section below and I'll do my best to answer them.

If you enjoy these tutorials and would like to support them, the easiest way is simply to like the video and give it a thumbs up. It's also a huge help to share these videos with anyone you think would find them useful.

Please consider clicking the SUBSCRIBE button to be notified of future videos, and thank you all for watching.

You can find me on:

Blog - http://bhattbhavesh91.github.io

Twitter -  https://twitter.com/_bhaveshbhatt

GitHub - https://github.com/bhattbhavesh91

Medium -  https://medium.com/@bhattbhavesh91

#GPT3 #NLP



 
Read more: 

Will GPT-3's AI make writers obsolete? - without bullshit




Related posts:


Global AI collaboration to fight pandemic, revive economies

The future is AI technology




Developing AI specialists through collaboration

 

 

AI Superpowers: China, Silicon Valley, and the New World Order; Singapore tries its own path in clash