OpenAI GPT-4 Passes Bar Exam and SAT, Scoring in the 90th Percentile – Except For These

By Ankita | Updated on:

OpenAI’s much-awaited language model GPT-4 recently launched on ChatGPT Plus and via the API (waitlist). Ever since the multimodal model was released, people have been talking about its ability to score impressively on several examinations, such as the Uniform Bar Exam, the SAT, the GRE, and more.

Apparently, OpenAI GPT-4 has passed the Bar Exam and the SAT in the 90th percentile, which is quite impressive for an AI language model. In addition, GPT-4 took a variety of tests, from high school to college level, to showcase its capabilities.

The tests taken by GPT-4 included multiple-choice questions (MCQs) and free-response questions, and GPT-4 worked through all of them using a standard testing methodology.
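OpenAI has not published its full evaluation harness, so the exact prompting is unknown. As a rough sketch, posing a multiple-choice question to GPT-4 through the chat API might look like this (the `build_request` helper and the prompt wording are illustrative assumptions, not OpenAI’s actual setup):

```python
# Sketch of posing an exam-style multiple-choice question to GPT-4
# via OpenAI's chat-completions API. The helper and the prompt wording
# are illustrative assumptions, not OpenAI's actual test harness.

def build_request(question: str) -> dict:
    """Assemble a chat-completion request asking for a single-letter answer."""
    return {
        "model": "gpt-4",
        "messages": [
            {
                "role": "system",
                "content": "You are taking a multiple-choice exam. "
                           "Reply with only the letter of the correct choice.",
            },
            {"role": "user", "content": question},
        ],
    }

# Sending the request requires the `openai` package and an API key:
# import openai
# response = openai.ChatCompletion.create(**build_request(
#     "Which section of the Uniform Bar Exam is multiple choice? "
#     "A) MEE  B) MPT  C) MBE"))
# print(response["choices"][0]["message"]["content"])
```

Free-response questions would simply omit the single-letter instruction and be graded on the generated text instead.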

In this article, we will look at the examinations GPT-4 took and how the multimodal language model performed. So, let’s start.


GPT-4 – What, like law school, is hard?


OpenAI’s new language model GPT-4 easily passed the LSAT (Law School Admission Test), scoring 163 for an estimated 88th percentile. Apart from this, GPT-4 scored 298/400 on the Uniform Bar Exam (MBE+MEE+MPT), an estimated 90th percentile.

Meanwhile, OpenAI’s previous model, GPT-3.5, scored 213/400 on the Uniform Bar Exam (an estimated 10th percentile) and 149 on the LSAT (40th percentile).

For GPT-4, college admissions tests were a piece of cake


GPT-4 took college admissions tests to showcase its ability to handle complex questions. The multimodal language model took the SAT in math and reading/writing, along with three sections of the GRE (Graduate Record Examination).

On the GRE quantitative section, GPT-4 scored in the 88th percentile, and on the GRE verbal section, the 99th.

However, GPT-4 fared less well on the GRE writing test, scoring in the 54th percentile. Still, its overall GRE results beat those of GPT-3.5: OpenAI’s prior model never scored above the 63rd percentile on any of the GRE sections it took.

In addition to college admissions tests, GPT-4 also took high school examinations, including AP (Advanced Placement) exams such as Biology, Chemistry, English Literature and Composition, Calculus BC, Art History, Psychology, and more.

It aced AP Art History and AP Biology, scoring in the 86th–100th percentile on AP Art History and the 84th–100th percentile on AP Biology.

In AP Calculus BC, GPT-4 did not fare as well, scoring in the 43rd–59th percentile. It scored in the 44th percentile in AP English Language and the 14th–44th percentile in English Literature and Composition.

Although GPT-4 didn’t excel on every high school examination, it still performed decently on most of them, often scoring in the 86th–100th percentile range, which is quite impressive.

GPT-4 has some coding work to do


GPT-4 still needs to work on its coding abilities: on Leetcode, it managed to solve only 31 of 41 easy-level problems.

The language model struggled at the medium and hard levels, solving 21 of 80 and 3 of 45 problems, respectively.

Its rating on Codeforces, which hosts competitive programming contests, was 392, placing it in the “Newbie” category (anything below 1199).

That said, the live-stream demonstration for developers did showcase GPT-4’s ability to write Python.

However, GPT-4 still requires some manual tweaking to produce correct parameters, which may be the reason behind these coding scores.

OpenAI’s GPT-4 Can Analyze Visual Images to Pass Bar Exam

The major factor that separates GPT-4 from OpenAI’s earlier models is its ability to analyze and understand visual inputs. GPT-4 can accept images as input and can process up to 25,000 words of text.

With its multimodal capabilities, GPT-4 can easily read, summarize, translate, and generate text answers in a human-like way, allowing it to exhibit “human-level performance” on bar exams, the SAT, and more.

In addition to exam questions, vision inputs can also be used to generate automated captions, or even a chocolate cake recipe from a simple image of flour and eggs.

Since GPT-4 can understand visual input, it can answer questions asked about an image and provide a proper step-by-step answer in a human-like manner.
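Image input was waitlisted at launch rather than generally available, but as a sketch, an image-plus-text question could be assembled like this (the model name and the content-parts message format here are assumptions based on OpenAI’s vision-enabled chat API, and the helper is illustrative):

```python
# Sketch of pairing an image with a text question in one chat message,
# following the content-parts format of OpenAI's vision-enabled chat API.
# The model name and structure are assumptions; image input was waitlisted
# at GPT-4's launch.

def build_vision_request(question: str, image_url: str) -> dict:
    """Combine a text question and an image URL into a single user message."""
    return {
        "model": "gpt-4-vision-preview",  # assumed vision-capable model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# Example: asking GPT-4 to reason step by step about a diagram.
# req = build_vision_request(
#     "Explain, step by step, what this chart shows.",
#     "https://example.com/chart.png")
```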

What Other Exams Can GPT-4 Pass?


GPT-4 has set some benchmarks in professional and academic examinations. The examinations taken by GPT-4 include the GRE (Graduate Record Examination), the LSAT, the SAT, AP exams, the Introductory, Certified, and Advanced Sommelier exams, Leetcode (easy, medium, and hard levels), the Medical Knowledge Self-Assessment Program, the USNCO Local Section Exam, and more.
