
Compress Large Language Models for Better Performance with SparseGPT

On Twitter recently, a user named Jay Hack (@mathemagic1an) posted an excerpt of an abstract describing how massive language models can be pruned while retaining their performance. The abstract comes from a paper by researchers Elias Frantar and Dan Alistarh. In it, the researchers show that language models in the GPT family can be ‘pruned to 50% sparsity without any retraining.’

This is a significant breakthrough for the usability of large language models, particularly those in the GPT family. GPT stands for Generative Pre-trained Transformer, and you may already be familiar with the most popular of these systems in the current artificial intelligence landscape: ChatGPT by OpenAI. As outlined in the abstract, the breakthrough was achieved with a new pruning method called SparseGPT. This method was designed specifically to work on truly large language models such as OPT-175B and BLOOM-176B, both of which are completely open-source and among the largest models of their kind currently available.

Related: What are lawmakers and regulators doing about AI?

The wording of the abstract is relatively difficult to understand if you don’t have a background in this particular field, so I’ll try to break it down into simpler terms. Basically, modern machine learning systems are built on some kind of neural network. Early researchers and pioneers chose to call it a neural network because the end goal was to loosely simulate how the human brain, with all of its billions of neurons, operates. In a simple neural network, you always have an input, an output, and a lot of confusing mathematical bits in the middle, arranged in layers of complex data-handling systems.

Each ‘neuron’ represents a value, and the neurons connect with other neurons in the system to form a network that can perform certain equations to determine output values. Within these hidden layers, there are certain parameters in place that transform input data into the eventual output. These parameters are basically like dials you can adjust to change the way your network operates, and we call these weights.
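To make that concrete, here is a minimal sketch in plain Python (not the actual GPT architecture, and the weight values are invented purely for illustration) of a tiny network where each ‘neuron’ holds a value and the weights are the dials that transform the inputs into the output:

```python
def relu(x):
    # A common activation function: negative values become zero
    return x if x > 0.0 else 0.0

def forward(inputs, w_hidden, w_out):
    # Hidden layer: each neuron is a weighted sum of the inputs
    hidden = [relu(sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
    # Output layer: a weighted sum of the hidden neuron values
    return sum(w * h for w, h in zip(w_out, hidden))

# 2 inputs -> 3 hidden neurons -> 1 output; weights chosen arbitrarily
w_hidden = [[0.5, -0.2], [0.8, 0.1], [-0.3, 0.7]]
w_out = [1.0, -0.5, 0.25]
print(forward([1.0, 2.0], w_hidden, w_out))
```

Turning any of those dials changes the output the network produces for the same input; training is the process of turning billions of them until the outputs are useful.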

Now, with these massive language models, you need to take into account what kind of information these systems have to process. They need to recognize the patterns that make up what we inherently recognize as letters, numbers, words, phrases and abstract ideas. All of this complex computation is achieved through weights, and in large language models the weights can number in the hundreds of billions; OPT-175B alone has 175 billion parameters.

Basically, this new method, SparseGPT, shows how, in some cases, more than 100 billion of these individual weights can be ignored while still producing largely accurate results. In even more basic terms, SparseGPT lets us cut down the number of computations that need to take place in these hidden layers of complex digital grey matter, improving efficiency with a negligible loss of accuracy.
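SparseGPT itself uses a more sophisticated layer-wise, second-order reconstruction method, but the core idea of sparsity can be sketched with the simplest pruning approach, magnitude pruning: zero out the smallest weights and skip them during computation. The sketch below is an illustration of that simpler idea, not the paper's algorithm:

```python
def magnitude_prune(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    n_prune = int(len(weights) * sparsity)
    # Indices of the weights, ordered from smallest to largest magnitude
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0  # a zero weight contributes nothing and can be skipped
    return pruned

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002]
print(magnitude_prune(w))  # half of the weights become exact zeros
```

At 50% sparsity, half of all multiply-and-add operations simply disappear, which is exactly what makes a pruned model cheaper to run.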


Game-changing stuff, right? If you’re interested in artificial intelligence, machine learning, or neural networks, we have a host of other tantalizing news bits for you to read. And if this wasn’t really your cup of tea, or rather, what you like to read while drinking a cup of tea, we have plenty of other articles that may interest you. For instance, is it possible that Elon Musk’s Twitter verification crusade was all just clever marketing?
