BERT stands for Bidirectional Encoder Representations from Transformers. It is an open-source, transformer-based machine learning technique for natural language processing (NLP) released by Google. It was developed by Jacob Devlin and his Google colleagues in 2018. BERT is pre-trained on two tasks: Masked Language Modeling and Next Sentence Prediction.
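
To make the first pre-training task concrete, the sketch below runs a pre-trained BERT model on a sentence with a masked token. It assumes the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint, neither of which is mentioned above, so it is an illustration of masked language modeling rather than a description of the original training setup.

```python
# Minimal masked-language-modeling demo (assumes the Hugging Face
# `transformers` library and a PyTorch backend are installed).
from transformers import pipeline

# Load a pre-trained BERT checkpoint wrapped in a fill-mask pipeline.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts the token behind [MASK] using both left and right context,
# which is the "bidirectional" part of its name.
for prediction in unmasker("Paris is the [MASK] of France."):
    print(prediction["token_str"], round(prediction["score"], 3))
```

During pre-training, roughly 15% of input tokens are masked in this way and the model learns to recover them; the fill-mask pipeline simply exposes that learned behavior at inference time.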