
GPT-4 Demo -How to Watch Live Demo of Ai model


Andrew NG


Earlier this week, OpenAI released its much-awaited language model, GPT-4. The release has taken the market by storm thanks to the model's advanced capabilities, which can currently be accessed through ChatGPT Plus.

Even though GPT-4 is not yet accessible via the API (you can join the waitlist), developers can still explore the functionality and features the GPT-4 API offers through a live stream presented by OpenAI's president, Greg Brockman.

Therefore, in this article, we will cover everything related to the GPT-4 demo and how you can watch it live.


GPT-4 Demo Live Stream on YouTube

OpenAI's latest launch, GPT-4, an advanced language model, was demonstrated in a live stream on OpenAI's YouTube account. The stream aired on Tuesday, 14th March, at 1 pm PT / 4 pm ET, and the recording remains available. Developers can access it using this link: GPT-4 Live Stream

The live stream was led by OpenAI's co-founder and president, Greg Brockman. The demo was primarily aimed at developers, showcasing the features and capabilities of GPT-4 and how users can get the most out of this advanced language model. It also included a comparison between GPT-4 and GPT-3.5, the version of the language model currently used in ChatGPT.

GPT-4 Live Streaming

OpenAI's president and co-founder live-streamed a demo of the newly launched GPT-4 API, addressing the features (including image inputs), capabilities, and limitations of the multimodal language model. The live stream was made available on OpenAI's YouTube account and can be viewed through this link: GPT-4 Live Stream
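For developers waiting on API access, the kind of request shown in the demo can be sketched as a Chat Completions payload. This is only a sketch that builds the request body without sending it anywhere; the model name and message structure follow OpenAI's documented Chat Completions format, and actual access requires an API key and waitlist approval.

```python
import json

# Sketch of a GPT-4 Chat Completions request body (built locally, not sent).
# The "model" field and the system/user message roles follow OpenAI's
# documented Chat Completions format.
def build_chat_request(user_prompt, system_prompt="You are a helpful assistant."):
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_chat_request("Summarize this blog post in one sentence.")
print(json.dumps(payload, indent=2))
```

In the real API, this payload would be POSTed to OpenAI's chat completions endpoint with an authorization header; the sketch stops short of that network call.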

OpenAI has also invited users to share their queries and follow discussions on OpenAI’s Discord Channel

GPT-4 on Discord

The live stream showed OpenAI using GPT-4 to build a Discord bot. OpenAI noted that GPT-3.5, the model currently powering ChatGPT, is not up to this task, especially since the bot is asked to handle inputs provided as both text and images.

The new GPT-4 model can analyze and describe image inputs effortlessly: it recognized a handwritten image provided on Discord and instantly delivered a description of its contents, something GPT-3.5 is unable to do.
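The mixed image-and-text input described above can be sketched as a single multimodal user message, using the content-parts shape OpenAI documents for its vision-capable models. The image URL below is a placeholder, and the sketch only constructs the message rather than submitting it to the API.

```python
def build_image_message(question, image_url):
    # One user message mixing a text part and an image part, in the
    # content-parts shape OpenAI documents for vision-capable models.
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = build_image_message(
    "Describe this hand-drawn website mock-up.",
    "https://example.com/mockup.png",  # placeholder image URL
)
print(msg["content"][0]["type"], msg["content"][1]["type"])
```

A bot like the one in the demo would place such a message into the `messages` list of a chat request, letting the model reason over the text question and the attached image together.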

This is a major improvement in GPT-4. In the demo, a Discord bot generated in the GPT-4 Playground took a photo of a hand-drawn website mock-up and transformed it into an actual working website, complete with newly generated content for the page.

Live demonstration on OpenAI's YouTube account

On 14th March 2023, a live demonstration of GPT-4 was streamed on OpenAI's YouTube account by OpenAI's co-founder and president, Greg Brockman.

The demo focused on how developers can use the GPT-4 API, showcasing its features and capabilities alongside a live comparison of GPT-4 with the current GPT-3.5 model. It also showed GPT-4 being used to build a Discord bot, including its ability to understand a hand-drawn mock-up of a website and transform those inputs into an actual working website.

In addition, Greg Brockman ran a live side-by-side comparison of GPT-4 and GPT-3.5: when he used GPT-3.5 to summarize a blog post, it was unsuccessful, while GPT-4 easily summarized the entire post in seconds.


Greg Brockman's demonstration of GPT-4 has definitely helped clear up developers' doubts about how to use the GPT-4 API. The live examples of its features, limitations, and more were a great way to showcase the capabilities of the new multimodal language model. Once the GPT-4 API is made available, developers can keep coming back to the live stream to understand how to use this model.
