Show HN: Trained Tiny Tales GPT (30M model) from scratch and deployed for $15
2 by kunalmishra78 | 0 comments on Hacker News.
For the last few weeks, I have been working on training an LLM from scratch and deploying it in production on Google Cloud Platform. I ended up training a 30-million-parameter model on 1 billion tokens and deploying it as a web service. You can try the model here: https://ift.tt/n0Iq2xa

The steps I took to build Tiny Tales GPT:

1. Downloaded and preprocessed an 8 GB dataset using Python's multiprocessing library.
2. Tokenized the data with byte pair encoding into 1 billion tokens, sharded across .bin files.
3. Defined the training setup and the model: a scaled-down LLaMA architecture with 30 million parameters.
4. Trained with Distributed Data Parallel on two A100 GPUs from JarvisLabs.ai (the most cost-effective option I found).
5. Wrote an inference script that predicts tokens from the trained model given an input context.
6. Built a REST API with Flask to expose the inference service to end users.
7. Deployed the service to the internet using GCP virtual machines, instance groups, load balancers, and DNS.

Minimal sketches of steps 1, 2, 4, 5, and 6 follow.
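The post has no code attached, but a minimal sketch of step 1's multiprocessing preprocessing could look like the following. The raw_data layout, output file, and clean_text logic are assumptions for illustration, not the author's actual pipeline:

    import multiprocessing as mp
    from pathlib import Path

    def clean_text(path: Path) -> str:
        # Hypothetical per-file preprocessing: strip whitespace and blank lines.
        text = path.read_text(encoding="utf-8")
        lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
        return "\n".join(lines)

    if __name__ == "__main__":
        files = sorted(Path("raw_data").glob("*.txt"))  # assumed layout
        # Fan per-file work out across all available cores.
        with mp.Pool(mp.cpu_count()) as pool:
            cleaned = pool.map(clean_text, files)
        Path("clean_corpus.txt").write_text("\n".join(cleaned), encoding="utf-8")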
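For step 2, a common pattern (nanoGPT-style) is to encode the corpus with a GPT-2 byte-pair tokenizer and stream uint16 token IDs into fixed-size .bin shards. The tiktoken vocabulary, shard size, and file naming below are assumptions; the post only says BPE was used:

    import numpy as np
    import tiktoken

    enc = tiktoken.get_encoding("gpt2")  # assumed BPE vocabulary
    SHARD_TOKENS = 100_000_000           # tokens per .bin shard (assumed)

    def write_shards(corpus_path: str, out_prefix: str) -> None:
        buf, shard_idx = [], 0
        with open(corpus_path, encoding="utf-8") as f:
            for line in f:
                buf.extend(enc.encode_ordinary(line))
                while len(buf) >= SHARD_TOKENS:
                    # uint16 is enough for the ~50k GPT-2 BPE vocab.
                    arr = np.array(buf[:SHARD_TOKENS], dtype=np.uint16)
                    arr.tofile(f"{out_prefix}_{shard_idx:03d}.bin")
                    buf = buf[SHARD_TOKENS:]
                    shard_idx += 1
        if buf:  # write the remainder as a final, smaller shard
            np.array(buf, dtype=np.uint16).tofile(f"{out_prefix}_{shard_idx:03d}.bin")

    write_shards("clean_corpus.txt", "shard")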
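Step 4's two-GPU run most likely used PyTorch's DistributedDataParallel launched via torchrun. A skeleton of the wiring, with a placeholder module and synthetic batch standing in for the real 30M-parameter LLaMA-style model and data loader:

    # Launch with: torchrun --nproc_per_node=2 train.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")
        rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(rank)

        model = torch.nn.Linear(512, 512).cuda(rank)  # placeholder for the 30M model
        model = DDP(model, device_ids=[rank])
        opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

        for step in range(1000):  # assumed step count
            x = torch.randn(32, 512, device=f"cuda:{rank}")  # stand-in for a token batch
            loss = model(x).pow(2).mean()                    # stand-in for cross-entropy
            opt.zero_grad()
            loss.backward()  # DDP all-reduces gradients across the two GPUs here
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()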
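Step 5's inference script is presumably an autoregressive sampling loop over the trained model. A sketch, assuming the model maps a [batch, time] tensor of token IDs to [batch, time, vocab] logits; the temperature and top-k values are illustrative:

    import torch

    @torch.no_grad()
    def generate(model, idx, max_new_tokens, block_size, temperature=0.8, top_k=50):
        # Autoregressively extend the context `idx` (shape [1, T]) token by token.
        for _ in range(max_new_tokens):
            idx_cond = idx[:, -block_size:]      # crop to the model's context window
            logits = model(idx_cond)[:, -1, :]   # logits for the last position only
            logits = logits / temperature
            if top_k is not None:
                v, _ = torch.topk(logits, top_k)
                logits[logits < v[:, [-1]]] = -float("inf")  # keep only the top-k
            probs = torch.softmax(logits, dim=-1)
            next_id = torch.multinomial(probs, num_samples=1)
            idx = torch.cat((idx, next_id), dim=1)
        return idx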
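Step 6's Flask service could be as small as a single JSON endpoint. The /generate route, payload fields, and run_inference stub below are hypothetical names standing in for the real glue to the inference script:

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def run_inference(prompt: str, max_tokens: int) -> str:
        # Stub: the real service would call the trained model's sampling loop here.
        return prompt + " ..."

    @app.route("/generate", methods=["POST"])
    def generate_endpoint():
        payload = request.get_json(force=True)
        prompt = payload.get("prompt", "")
        max_tokens = int(payload.get("max_tokens", 100))
        return jsonify({"prompt": prompt, "completion": run_inference(prompt, max_tokens)})

    if __name__ == "__main__":
        # Behind GCP's load balancer a production WSGI server (e.g. gunicorn)
        # would front this app; app.run is only for local testing.
        app.run(host="0.0.0.0", port=8080)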