So here we are: we want to make a custom LLM for depression therapy (which we are going to feed different PDFs of books on treating depression) + Stable Diffusion (image therapy) + audio (binaural beats for healing). So, any idea how we can create a custom LLM (also going to include TTS & STT) in this chatbot? What tools and library we are...
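For the "feed it PDFs" part, the usual pattern is retrieval-augmented generation: extract text from the PDFs, split it into chunks, retrieve the chunks most relevant to the user's message, and pass them to the LLM as context, with STT in front and TTS behind. A minimal, dependency-free sketch of just the chunk-and-retrieve step; the word-overlap scoring is a stand-in for real embeddings, and the chunk size is an illustrative assumption:

```python
def chunk_text(text, size=200):
    """Split text extracted from a PDF into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(chunks, question, k=2):
    """Return the k chunks sharing the most words with the question.

    A toy relevance score; a real pipeline would rank by embedding
    similarity instead of raw word overlap.
    """
    q = set(question.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return ranked[:k]
```

The retrieved chunks would then be pasted into the LLM prompt as context before the user's question.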
I need help with this project idea. I am new to this, so any kind of info would help. I would like to develop a model which detects whether a driver is drowsy or not from a dashcam live feed. How do I tackle the problem? How should I approach it? Is there a better or worse way to do it, or things to avoid? Any kind of info would be extremely helpful. Thanks :) submitted...
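One common starting point is to run a facial-landmark detector (e.g. dlib or MediaPipe) on each frame and compute the eye aspect ratio (EAR) from the six landmarks around each eye; if the EAR stays below a threshold for many consecutive frames, the eyes are likely closed. A sketch of just that metric, assuming the standard 6-point eye landmark ordering; the 0.2 threshold and 3-frame run are illustrative values, not tuned numbers:

```python
import math

def ear(eye):
    """Eye aspect ratio from six (x, y) landmarks p1..p6.

    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it is roughly constant
    while the eye is open and drops toward 0 as it closes.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def is_drowsy(ears, threshold=0.2, consec=3):
    """Flag drowsiness when EAR stays below threshold for consec frames."""
    run = 0
    for e in ears:
        run = run + 1 if e < threshold else 0
        if run >= consec:
            return True
    return False
```

A learned end-to-end CNN on cropped face frames is the other common route; the landmark/EAR approach is cheaper and easier to debug on live video.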
I manipulated the hidden-state values obtained from the Llama-2 model after feeding it a certain input, let's call it Input_1. Now I want to examine the (causal) output it produces from this. My hypothesis is that it should correspond to a different input, let's call it Input_2, which would yield a distinct output from the initial input. I got...
Hi, I'm a lead data analyst trying to tweak my career direction. I've got some basic exposure to web development and deep learning. Which direction should I take, considering AI can disrupt the future job market in either of them, or both? submitted by /u/Sufficient-Result987
I am working on a research project which involves experimenting with RAG. I want to run the models first to get an understanding of how the whole pipeline works. I found some datasets on HuggingFace (such as https://huggingface.co/datasets/explodinggradients/WikiEval). My understanding of RAG is that I should be given a datastore, and then I perform...
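To make the "datastore" part concrete: a RAG pipeline indexes the corpus once, then for each question retrieves the nearest documents and hands them to the generator; datasets like WikiEval supply question/context pairs to evaluate the retrieval against. A toy skeleton with bag-of-words cosine similarity standing in for a real embedding model (the class and method names here are illustrative, not from any library):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

class Datastore:
    def __init__(self, docs):
        self.docs = docs
        self.vecs = [embed(d) for d in docs]   # index the corpus once

    def retrieve(self, query, k=1):
        """Return the k documents most similar to the query."""
        qv = embed(query)
        ranked = sorted(range(len(self.docs)),
                        key=lambda i: cosine(qv, self.vecs[i]),
                        reverse=True)
        return [self.docs[i] for i in ranked[:k]]
```

The generation step would then prepend the retrieved documents to the question in the LLM prompt.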
Did you guys ever feel that the same open-source LLM gives slightly different answers on different playgrounds? Like, if you use Llama 70B on Perplexity and on Groq, you'll notice the difference. Can someone tell me why that is? submitted by /u/IntentionNo5258
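Even with identical weights, playgrounds pin different sampling settings (temperature, top-p, seed), system prompts, and quantizations, so token choices diverge. A small stdlib illustration of one part of that, temperature-scaled sampling from a fixed next-token distribution (the logits are made up):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample a token id from softmax(logits / temperature).

    temperature == 0 degenerates to argmax (greedy), which is
    deterministic; at higher temperatures different RNG states (or
    different defaults on different playgrounds) pick different tokens.
    """
    if temperature == 0:                      # greedy decoding
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for stability
    probs = [math.exp(s - m) for s in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(logits) - 1
```

Greedy decoding always returns the same token for the same logits, while sampling at temperature 1 varies from draw to draw, which is one reason two hosts serving the "same" model answer differently.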