[D] Help me with my deep learning project

[D] Discussion / [P] Project — We want to build a custom LLM for depression therapy (we are going to feed it different PDFs of books on treating depression), plus Stable Diffusion (image therapy) and audio (binaural beats for healing). Any idea how we can create this custom LLM (we also plan to include TTS and STT) in the chatbot? What tools and libraries we are...
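For the retrieval part, a minimal sketch of indexing the PDFs and looking up passages for the chatbot, assuming pypdf and sentence-transformers; the folder name, chunk size, and embedding model below are illustrative choices, not from the post:

```python
# Minimal sketch: index therapy PDFs and retrieve passages for a chatbot prompt.
# Assumptions: a ./books folder of PDFs exists; chunk size and model are illustrative.
from pathlib import Path

import numpy as np
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer

def load_chunks(folder="books", chunk_chars=1000):
    """Extract text from every PDF and split it into fixed-size character chunks."""
    chunks = []
    for pdf_path in Path(folder).glob("*.pdf"):
        text = " ".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
        chunks += [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    return chunks

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, commonly used embedding model
chunks = load_chunks()
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(query, k=3):
    """Return the k chunks most similar to the query (cosine similarity via dot product)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(chunk_vecs @ q)[::-1][:k]
    return [chunks[i] for i in top]

# The retrieved passages would then be prepended to the chat prompt of whatever LLM
# is chosen; TTS/STT and the image/audio generation pieces sit outside this sketch.
print(retrieve("coping strategies for low mood")[0][:200])
```

Higher-level frameworks such as LangChain or LlamaIndex package these same steps, and off-the-shelf components (e.g., Whisper for STT) can be bolted on for the speech side.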

Fri May 10, 2024 12:42
[P] Drowsiness detection while driving via dashcam

I need help with this project idea. I am new to this, and any kind of info would help. I would like to develop a model that detects whether the driver is drowsy or not from a live dashcam feed. How do I tackle the problem, how should I approach it, and are there better or worse ways to do it, or things to avoid? Any kind of info would be extremely helpful. Thanks :) submitted...
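A common first step, before training anything, is the eye-aspect-ratio (EAR) heuristic over facial landmarks. Below is a minimal sketch assuming OpenCV and MediaPipe Face Mesh; the landmark indices, the 0.2 threshold, and the 15-frame window are rough, commonly used values rather than anything from the post:

```python
# Minimal sketch: flag possible drowsiness from a camera feed using the eye aspect
# ratio (EAR) over MediaPipe Face Mesh landmarks.
import cv2
import mediapipe as mp
import numpy as np

# Eye contour indices for MediaPipe Face Mesh; these and the 0.2 threshold are
# rough, commonly used values, not anything taken from the original post.
LEFT_EYE = [362, 385, 387, 263, 373, 380]
RIGHT_EYE = [33, 160, 158, 133, 153, 144]

def ear(p):
    """EAR = (|p2-p6| + |p3-p5|) / (2|p1-p4|); it drops toward 0 as the eye closes."""
    return (np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])) / (2 * np.linalg.norm(p[0] - p[3]))

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
cap = cv2.VideoCapture(0)      # stand-in for the dashcam stream
closed_frames = 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_face_landmarks:
        lm = result.multi_face_landmarks[0].landmark
        h, w = frame.shape[:2]
        pts = lambda idx: np.array([[lm[i].x * w, lm[i].y * h] for i in idx])
        avg_ear = (ear(pts(LEFT_EYE)) + ear(pts(RIGHT_EYE))) / 2
        closed_frames = closed_frames + 1 if avg_ear < 0.2 else 0
        if closed_frames > 15:  # eyes closed for roughly half a second at 30 fps
            cv2.putText(frame, "DROWSY", (30, 60), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 3)
    cv2.imshow("dashcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

A learned classifier (for example a small CNN on eye/face crops, or a temporal model over landmark sequences) is the natural next step once a baseline like this works.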

Fri May 10, 2024 12:42
Generating outputs from last layer's hidden state values [D]

I manipulated the hidden state values obtained from the Llama-2 model after feeding it a certain input, let's call it Input_1. Now I want to examine the output (causal LM output) the model produces from these manipulated states. My hypothesis is that they should correspond to a different input, let's call it Input_2, which would yield an output distinct from the one for the initial input. I got...
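One way to decode from edited hidden states, as a sketch assuming the Hugging Face transformers LlamaForCausalLM implementation (where the last entry of hidden_states should already include the final RMSNorm), is to project the modified tensor through the model's lm_head; the checkpoint name and the perturbation are placeholders:

```python
# Sketch: take the last-layer hidden states for Input_1, perturb them, and decode
# the causal-LM output by projecting through lm_head. The model name and the
# random nudge are placeholders for the actual manipulation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"          # assumed checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

inputs = tok("Input_1 goes here", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)
    hidden = out.hidden_states[-1]               # (batch, seq_len, hidden_dim)

    # --- manipulation happens here; a random nudge stands in for the real edit ---
    hidden = hidden + 0.1 * torch.randn_like(hidden)

    logits = model.lm_head(hidden)               # project edited states to vocab logits
    next_token = logits[:, -1, :].argmax(dim=-1) # greedy next-token prediction

print(tok.decode(next_token))
```

Generating more than one token this way requires feeding the prediction back autoregressively, which a single forward pass like this does not cover.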

Fri May 10, 2024 09:42
[D] AI effects on job market in Web Development vs Deep Learning

Hi, I'm a lead data analyst trying to tweak my career direction. I've got some basic exposure to Web Development and Deep Learning. Which direction should I take, considering that AI could disrupt the future job market in either of them, or both? submitted by /u/Sufficient-Result987 [link] [comments]

Fri May 10, 2024 09:42
[D] How to use RAG benchmarks in practice

I am working on a research project which involves experimenting with RAG. I want to run the models first to get an understanding of how the whole pipeline works. I found some datasets on Hugging Face (such as https://huggingface.co/datasets/explodinggradients/WikiEval). My understanding of RAG is that I should be given a datastore, and then I perform...
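As a rough sketch of how a benchmark like this could be wired into a pipeline, assuming the dataset supplies both the documents for the datastore and the questions to retrieve for; the split name and the "source"/"question" column names below are placeholders to check against the dataset card:

```python
# Sketch: use a benchmark's own documents as the RAG datastore, then retrieve
# context for each benchmark question before generation.
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

ds = load_dataset("explodinggradients/WikiEval", split="train")  # split name may differ
embedder = SentenceTransformer("all-MiniLM-L6-v2")

# 1) Build the datastore from the benchmark's documents ("source" is a placeholder column).
docs = [str(row["source"]) for row in ds]
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

# 2) For each benchmark question, retrieve the top passages for the generator.
def retrieve(question, k=3):
    q = embedder.encode([question], normalize_embeddings=True)[0]
    return [docs[i] for i in np.argsort(doc_vecs @ q)[::-1][:k]]

question = str(ds[0]["question"])                # placeholder column name
context = "\n\n".join(retrieve(question))
prompt = f"Answer using only the context.\n\nContext:\n{context}\n\nQuestion: {question}"
# The prompt would now go to the chosen LLM; the generated answer is what the
# benchmark's metrics are then scored against.
```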

Fri May 10, 2024 09:42
Same LLM, different results [D]

Did you guys ever feel that the same open-source LLM gives slightly different answers on different playgrounds? For example, if you use Llama 70B on Perplexity and on Groq, you'll notice the difference. Can someone tell me why that is? submitted by /u/IntentionNo5258 [link] [comments]
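Differences like this usually come down to decoding settings (temperature, top-p, random seeds), different chat prompt templates, and sometimes different quantization on each host. A minimal sketch of the decoding effect alone, using a smaller checkpoint as a stand-in for "Llama 70B":

```python
# Sketch: the same checkpoint can answer differently purely because of decoding settings.
# Greedy decoding is deterministic; sampled decoding (temperature/top-p, no fixed seed)
# is not, and hosted playgrounds rarely expose or share these settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"     # stand-in checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

prompt = "Explain overfitting in one sentence."
ids = tok(prompt, return_tensors="pt").input_ids

greedy = model.generate(ids, do_sample=False, max_new_tokens=40)
sampled_1 = model.generate(ids, do_sample=True, temperature=0.8, top_p=0.9, max_new_tokens=40)
sampled_2 = model.generate(ids, do_sample=True, temperature=0.8, top_p=0.9, max_new_tokens=40)

print(tok.decode(greedy[0], skip_special_tokens=True))     # identical on every run
print(tok.decode(sampled_1[0], skip_special_tokens=True))  # these two will usually differ
print(tok.decode(sampled_2[0], skip_special_tokens=True))
```

Quantization (e.g., 8-bit vs fp16 weights) and mismatched prompt templates can shift outputs even when the sampling settings are identical.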

Fri May 10, 2024 09:42
