Transformers are at the heart of modern Large Language Models (LLMs). They were a major innovation in machine learning, introduced by a team of Google researchers in 2017, that allowed AI models to train on much larger datasets and perform well across a much broader range of tasks.
Here's a description of transformers from an excellent blog post by Dale Markowitz:
A Transformer is a type of neural network architecture. To recap, neural nets are a very effective type of model for analyzing complex data types like images, videos, audio, and text. But there are different types of neural networks optimized for different types of data. For example, for analyzing images, we’ll typically use convolutional neural networks or “CNNs.” Vaguely, they mimic the way the human brain processes visual information.
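The ingredient that sets the Transformer architecture apart from CNNs and earlier sequence models is attention: every token in a sequence is compared against every other token, and each output is a weighted blend of the inputs. Here's a minimal plain-Python sketch of scaled dot-product self-attention; the toy vectors and function names are illustrative, not from the post:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that are positive and sum to 1."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: each query produces a weighted
    mix of the value vectors, weighted by query-key similarity."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Self-attention on a toy "sequence" of three 2-dim token vectors (Q = K = V)
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(x, x, x)
print(len(out), len(out[0]))  # 3 2
```

Because every token attends to every other token in one step (rather than one position at a time, as in recurrent networks), this computation parallelizes well, which is a big part of why Transformers scale to such large datasets.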
She also did a video if that's more your speed: