I Fine-Tuned an LLM With My Telegram Chat History. Here’s What I Learned
Manage episode 423584197 series 3474148
This story was originally published on HackerNoon at: https://hackernoon.com/i-fine-tuned-an-llm-with-my-telegram-chat-history-heres-what-i-learned.
Pretending to be ourselves and our friends by training an LLM on Telegram messages
Check more stories related to machine-learning at: https://hackernoon.com/c/machine-learning. You can also check exclusive content about #fine-tuning-llms, #ai-model-training, #training-ai-with-telegram, #personalized-ai-chatbot, #russian-language-ai, #mistral-7b-model, #lora-vs-full-fine-tuning, #hackernoon-top-story, and more.
This story was written by: @furiousteabag. Learn more about this writer by checking @furiousteabag's about page, and for more stories, please visit hackernoon.com.
I fine-tuned a language model on my Telegram messages to see whether it could replicate my writing style and conversation patterns. I chose the Mistral 7B model for its performance and experimented with both LoRA (low-rank adaptation) and full fine-tuning. I extracted all my Telegram messages, totaling 15,789 sessions over five years, and initially tested the generic conversation-fine-tuned Mistral model. For LoRA, training on an RTX 3090 took 5.5 hours and cost $2; it improved style mimicry but struggled with context and grammar. Full fine-tuning on eight A100 GPUs improved language quality and context retention but still made some errors. Overall, while the model captured conversational style and common topics well, its responses often lacked context.
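The summary mentions splitting five years of Telegram history into 15,789 sessions. As an illustrative sketch (not the author's actual script), one common way to do this is to group consecutive messages into a session whenever the gap between them stays under an inactivity threshold; the one-hour `gap` value and the `date`/`from`/`text` field names (loosely modeled on Telegram Desktop's JSON export) are assumptions here:

```python
from datetime import datetime, timedelta

def split_into_sessions(messages, gap=timedelta(hours=1)):
    """Group consecutive messages into sessions separated by `gap` of inactivity."""
    sessions, current, prev_time = [], [], None
    for msg in messages:
        t = datetime.fromisoformat(msg["date"])
        if prev_time is not None and t - prev_time > gap:
            sessions.append(current)  # gap exceeded: close the current session
            current = []
        current.append(msg)
        prev_time = t
    if current:
        sessions.append(current)
    return sessions

# Toy example: a 5-hour pause splits the history into two sessions.
msgs = [
    {"date": "2024-01-01T10:00:00", "from": "Alice", "text": "hi"},
    {"date": "2024-01-01T10:02:00", "from": "Bob", "text": "hey"},
    {"date": "2024-01-01T15:00:00", "from": "Alice", "text": "lunch?"},
]
print(len(split_into_sessions(msgs)))  # → 2
```

Each resulting session can then be serialized into a chat-template string (speaker name plus text per turn) to form one training example for either the LoRA or the full fine-tuning run.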
316 episodes