Chinese telecom giant Huawei Technologies is aiming to help build the foundation for China’s computing power and offering the world a “second option,” Huawei’s rotating chair Meng Wanzhou said on Wednesday, as the US and some of its Western allies are pushing for a complete tech decoupling.
At Huawei Connect 2023, which showcased the company’s latest products and technologies, Meng announced a number of new initiatives to bolster its computing base as part of the company’s “All Intelligence” strategy.
Meng said that computing power is at the core of the development of artificial intelligence, and that Huawei will build a robust computing power foundation to support the diverse requirements of various industries.
“We support every organization and every industry in training their large models with their own information,” Meng said.
According to Meng, Huawei’s All Intelligence strategy aims to accelerate the “intelligence” of all industries: connecting “everything,” both virtual and physical, allowing model applications to benefit everyone, and providing computing power for every decision.
Intelligent transformation is a global trend in manufacturing and is crucial for the high-quality development of China’s manufacturing industry. Intelligence, and the computing power underlying it, has become a focal point in global technological competition.
Huawei, which has been a top target of the US technological crackdown, has been investing heavily in building its computing power and its large language model (LLM).
According to Meng, an LLM, which absorbs massive amounts of knowledge, can be applied to multiple scenarios and lowers the threshold for AI development and application; LLMs also open the possibility of solving large-scale industrial problems.
The computing power requirement of a LLM doubles every four months, according to Zhou Bin, CTO of Huawei’s Ascend Computing Business.
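A doubling every four months compounds quickly. A back-of-the-envelope sketch (assuming the four-month doubling rate Zhou cites holds steady):

```python
# Compute-demand growth, assuming requirements double every 4 months.
def compute_growth(months: float, doubling_period: float = 4) -> float:
    """Return the multiplier on compute demand after `months` months."""
    return 2 ** (months / doubling_period)

print(compute_growth(12))  # one year: 2^3 = 8x
print(compute_growth(24))  # two years: 2^6 = 64x
```

At that rate, a year of model progress requires roughly eight times the computing power, which is why Meng frames the computing foundation as the bottleneck.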
Huawei has invested in research and development for decades, including in areas such as chemistry, materials science, physics and engineering. The combination of its connectivity and computing techniques underpins its advantages in intelligent products and systems.
Meng said Huawei is also focused on personnel training through cooperation with colleges.
Huawei is working with 2,600 universities around the world to jointly build information and communication technology academies, which train 200,000 students annually. Its “smart base” projects with 72 Chinese universities have provided more than 1,600 courses to 500,000 students, according to media reports.
“We invest about $3-5 billion annually in basic theory research,” Ren Zhengfei, Huawei’s founder, said at an International Collegiate Programming Contest event in August 2023.
How Scientists Predict Where Earthquakes Will Strike Next
The pair of earthquakes that hit Turkey and Syria this week left the region grappling with death and destruction. Despite the region being seismically active, this particular area hadn’t seen an earthquake of this size for decades. There are ways of knowing where the next big earthquakes will happen, but not when. Scientists use knowledge of fault lines and historical data to make their predictions, but sparing areas from mass casualties often relies on infrastructure policy. Building codes that prioritize strong buildings can save lives, but older structures remain vulnerable.
Across the globe, in California, the health impacts of electric vehicles are beginning to be seen. A study published this month finds that for every 20 EVs in a zip code, asthma-related visits to the emergency room drop by 3.2%. This is a striking number for a technology that’s just now becoming more commonplace. Joining Ira to talk about these stories and more is Umair Irfan, staff writer at Vox, based in Washington, D.C.
The past few months have seen a flurry of new, easy-to-use tools driven by artificial intelligence. It’s getting harder to tell what’s been created by a human: Programs like ChatGPT can construct believable written text, apps like Lensa can generate stylized avatars, while other developments can make pretty believable audio and video deep fakes.
Just this week, Google unveiled a new AI-driven chatbot called Bard, and Microsoft announced plans to incorporate ChatGPT within their search engine Bing. What is this new generation of AI good at, and where does it fall short?
Ira talks about the state of generative AI and takes listener calls with Dr. Melanie Mitchell, professor at the Santa Fe Institute and author of the book, Artificial Intelligence: A Guide for Thinking Humans. They are joined by Dr. Rumman Chowdhury, founder and CEO of Parity Consulting and responsible AI fellow at the Berkman Klein Center at Harvard University.
An artificial intelligence (AI) technology made by a firm co-founded by billionaire Elon Musk has won praise for its ability to generate coherent stories, novels and even computer code but it remains blind to racism or sexism.
GPT-3, as Californian company OpenAI’s latest AI language model is known, is capable of completing a dialogue between two people, continuing a series of questions and answers or finishing a Shakespeare-style poem.
Start a sentence or text and it completes it for you, basing its response on the gigantic amount of information it has been fed.
This could come in useful for customer service, lawyers needing to sum up a legal precedent or for authors in need of inspiration.
While the technology is not new and has not yet learnt to reason like a human mind, OpenAI’s latest offering has won praise for the way its text resembles human writing.
“It is capable of generating very natural and plausible sentences,” says Bruce Delattre, an AI specialist at data consulting agency Artefact.
“It’s impressive to see how much the model is able to appropriate literary styles, even if there are repetitions.”
GPT-3 is also capable of finding precise responses to problems, such as the name of an illness from a description of symptoms.
It can solve some mathematical problems, express itself in several languages, or generate computer code for simple tasks that developers have to do but would happily avoid.
Delattre tells AFP it all works thanks to “statistical regularities”.
“The model knows that a particular word (or expression) is more or less likely to follow another.”
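That idea, estimating which word is likely to follow another, can be illustrated with a toy bigram model built from word counts. GPT-3 itself uses a neural network over far longer contexts, so this is only a minimal sketch of the statistical principle Delattre describes:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for GPT-3's billions of web pages.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each word following `word`, by relative frequency."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", the corpus makes "cat" the most likely continuation.
print(next_word_probs("the"))
```

A real language model replaces these raw counts with learned parameters, but "pick a plausible next word, given what came before" is the same underlying mechanism.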
Billions of web pages
Amine Benhenni, scientific director at AI research and development firm Dataswati, tells AFP that “the big difference” compared to other systems is the size of the model.
GPT-3 has been fed the content of billions of web pages that are freely available online and all types of pieces of written work.
To give an idea of the magnitude of the project, the entire content of online encyclopaedia Wikipedia represents just 3% of all the information it has been given.
As such, it does not need to be retrained to perform tasks, as previous models did, when a new subject is introduced like medicine, law or the media.
Give it just a handful of examples of a task to do, such as completing a sentence, and it will then know how to complete any sentence it is given, no matter what the subject – a so-called “few-shot” language model.
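In practice, “priming” the model means prepending a handful of worked examples to the prompt and letting the model continue the pattern. The sketch below builds such a few-shot prompt as plain text; the sentiment-labeling task and the example pairs are invented for illustration:

```python
# A few demonstrations of the task, followed by the new case to complete.
examples = [
    ("I loved this film", "positive"),
    ("The plot was dull", "negative"),
    ("A true masterpiece", "positive"),
]

def few_shot_prompt(examples, query):
    """Format demonstrations as 'Review: ... / Sentiment: ...' pairs."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")  # the model completes this line
    return "\n\n".join(lines)

print(few_shot_prompt(examples, "Not worth the ticket price"))
```

The prompt ends mid-pattern, so the most statistically plausible continuation is the missing label; that is all “few-shot” means here, with no retraining involved.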
“It’s amazingly powerful if you know how to prime the model well,” Shreya Shankar, an AI-specialised computer scientist, said on Twitter after having used GPT-3.
“It’s going to change the ML (machine learning) paradigm.”
Despite the hype, however, GPT-3 is only 10th on the SuperGLUE benchmark that measures the language-understanding of algorithms.
Some users have also demonstrated that, when asked absurd questions, the model responds with senseless answers.
For instance, developer Kevin Lacker asked: “How many eyes does the sun have?”
“The sun has one eye,” it responded, Lacker wrote on his blog.
Fake reviews, fake news
Claude de Loupy, co-founder of French startup Syllabs that specialises in automated text creation, says the system lacks “pragmatism”.
Another major problem is that it unthinkingly replicates any stereotype or hate speech fed to it during training, and can quickly become racist, anti-Semitic or sexist.
As such, experts interviewed by AFP felt GPT-3 was not reliable enough for any sector needing to rely on machines, such as robo-journalism or customer services.
It can however be useful, like other similar models, for writing fake reviews or even mass-producing news stories for a disinformation campaign.
Concerned about “malicious applications of the technology,” OpenAI, which was co-founded in 2015 by Musk (who has since left) and is financed by Microsoft among others, chose not to release the previous version of the model, GPT-2, in February 2019.
Originally a non-profit, OpenAI then became a “capped profit” company, which means investors get a capped return.
And in June, the firm changed tack and opened its GPT-3 model to commercial use, allowing for user feedback.
A step Claude de Loupy says could yield big profits.
There is “no doubt that the amount of text generated by AI is about to explode on the Web”. – AFP
GPT 3 Demo and Explanation - An AI revolution from OpenAI
Half Ideas - Startups and Entrepreneurship
GPT 3 can write poetry, translate text, chat convincingly, and answer abstract questions. It's being used to code, design and much more. I'll give you a demo of some of the latest in this technology and some of how it works.
GPT-3 comes from a company called OpenAI. OpenAI was founded by Elon Musk and Sam Altman (former president of Y Combinator, the startup accelerator), with over a billion dollars invested, to collaborate on and create human-level AI for the benefit of society.
GPT-3 has been in development for a number of years. One of the early papers published was on generative pre-training. The idea behind generative pre-training (GPT) is that while most AIs are trained on labeled data, there's a ton of data that isn't labeled. If you can use that raw text to train and tune the AI, it can start to predict future text on the unlabeled data. You repeat the process until the predictions start to converge.
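The trick with unlabeled data is that the labels come from the text itself: the target at each position is simply the next word. Here's a minimal sketch of how training pairs are carved out of raw text (real models work on sub-word tokens rather than whole words, so this is illustrative only):

```python
# Self-supervised pretraining needs no human labels: the text labels itself.
text = "to be or not to be".split()

# Each training pair is (context so far, next word to predict).
pairs = [(text[:i], text[i]) for i in range(1, len(text))]

for context, target in pairs:
    print(" ".join(context), "->", target)
```

Every sentence on the web yields many such pairs for free, which is why scaling up the training corpus (and the model) is the core of the GPT recipe.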
The newest GPT is able to do a ton. Some of the demos include: - GPT 3 demo of how to design a user interface using AI
- GPT 3 demo of how to code a react application using AI
- GPT 3 demo of an excel plug-in to fill data using AI
- GPT 3 demo of a search engine/answer engine using AI
- GPT3 demo of command line auto-complete from English to shell commands
OpenAI has released GPT-3, a state-of-the-art language model made up of 175 billion parameters. In this video, I'll create a simple tutorial on how you can use OpenAI's API to use the GPT-3 model.
The previous OpenAI GPT model, GPT-2, had 1.5 billion parameters and was the biggest model at the time. GPT-3 can write poetry, translate text, chat convincingly, and answer abstract questions.
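At launch, the GPT-3 API was accessed by POSTing a JSON body to a completions endpoint. The sketch below only assembles such a request payload locally; the endpoint path in the comment and the field names reflect the early GPT-3 API and should be checked against OpenAI's current documentation before use:

```python
import json

# Request body for the early GPT-3 completions API.
# At launch this was POSTed to /v1/engines/davinci/completions
# with an "Authorization: Bearer <API key>" header.
payload = {
    "prompt": "Write a short poem about the sea:",
    "max_tokens": 64,    # cap on the length of the generated completion
    "temperature": 0.7,  # higher = more varied, lower = more predictable
}

body = json.dumps(payload)  # this is what would be sent over HTTPS
print(body)
```

The response contains the model's completion as text, which is the whole interface: prompt in, continuation out.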