eigenvalue a day ago | next |

I actually just finished making a service that does something similar, but it also transforms the transcripts into polished written documents with complete sentences and nice markdown formatting. It can also generate interactive multiple-choice quizzes. And it supports editing the markdown files, with revision history and one-click hosting.

I'm still doing the last testing of the site, but might as well share it here since it's so relevant:

https://youtubetranscriptoptimizer.com/

There might still be a few rough edges, so keep that in mind!

Terretta 21 hours ago | root | parent | next |

The pricing confusingly gives counts of short videos, rather than price per unit of time.

The vodcasts that most need transcription are long form. After the "don't make me do math" pricing, you do have a table of minutes, up to 60, so for a typical, say, ContraPoints vodcast episode, you multiply by 3, and find out that could cost $30 to turn into the optimized transcript. (Which the creator might well pay for if they value their time, but viewers might not.)

eigenvalue 18 hours ago | root | parent |

Thanks for the feedback. I'll try to clarify the pricing table a bit better. And yes, this is targeting creators more. If it turns out that viewers are the better target market, I might pivot it a bit. And I'm considering adding a discount for longer videos.

hackernewds a day ago | root | parent | prev | next |

why limit this to YouTube? it should work on any body of text, is that right?

eigenvalue a day ago | root | parent |

Yes, I'm also working on another version that is document-centric. It's a bit of a different problem. In the case of YouTube video transcripts, we are dealing with raw speech utterances. There could be run-on sentences, filler words and other speech errors, etc. Basically, it's a very far cry from a polished written document. Thus we need to really transform the underlying content to first get the optimized document, which can differ quite significantly from the raw transcript. Then we use that optimized document to generate the quizzes.

In the case of a document-only workflow, we generally want to stick very closely to what's in the document: just extract the text accurately using OCR if needed (or extract it directly when we don't need OCR), and then reformat it into nice-looking markdown, without changing the actual content itself, just its appearance. Once we've turned the original document into nice-looking markdown, we can use it to generate the quizzes and perhaps other related outputs (e.g., Anki cards, PowerPoint-type presentation slides, etc.).

Because of that fundamental difference in approach, I decided to separate it into two different apps. But I'm planning on reusing much of the same UI and backend structure. The document-centric app also seems like it has a broader base of potential users (like teachers: there are a lot of teachers out there, way more than there are YouTube content creators). I started with the YouTube app because my wife makes YouTube videos about music theory and I wanted to make something that at least she would actually want to use!

owenpalmer 2 days ago | prev | next |

This approach really doesn't make sense to me. The model has to output the entire transcript token by token, instead of simply adding it to the context window...

A more interesting idea would be a browser extension that lets you open a chat window from within YouTube, letting you ask it questions about certain parts of the transcript with full context in the system prompt.

ofou 2 days ago | root | parent | prev | next |

For sure, that's an interesting idea, but potentially very costly (for longer videos). A plus side of this strategy is that the transcription gets cleaned up a lot, and the math notation gets fixed up too. So it's just cleaner, well-formatted text for people who'd rather read a video than mindlessly watch it.

We at Emergent Mind are working on providing bits of a technical transcript to a model and then asking follow-up questions. You can check it out at http://emergentmind.com if curious.

hombre_fatal 2 days ago | root | parent | prev |

Until I read other comments here, I assumed that's what they were doing, since it bugged out on me and didn't regurgitate the transcript back to me, yet still let me ask questions about it.

https://chatgpt.com/share/66e9f5ae-8d20-8000-b3a5-7c1ba928b8...

spuz a day ago | prev | next |

How is it supposed to work? When I open this, I just see a prompt that says "Get the full transcription of any Youtube video, fast. Studies suggest that reading leads to better retention of complex information compared to video watching. Only English videos currently."

I tried pasting the URL of a YouTube video and I get the message "I'm unable to access the video directly, as the tool needed for that is disabled. However, if you'd like, you can summarize the video or let me know how I can assist with it!"

two_handfuls a day ago | prev | next |

I get what this is doing, but calling it "chat with a transcript" is weird. Like, documents and videos don't chat. We chat with a bot who has seen the document/video.

Kiro a day ago | root | parent | prev |

You're way too late starting that fight. "Chat with [anything]" has been an established term for a long time now.

two_handfuls 10 hours ago | root | parent |

In the enthusiast community, I suppose. It's not too late to adopt clearer terminology; this will be important as these things try to reach mainstream users.

romseb 2 days ago | prev | next |

It does not work with long form conversations like podcasts.

"I was unable to retrieve the transcript for this video due to its large size."

ofou 2 days ago | root | parent | next |

Coming soon! Currently, it works for videos under one hour. This limitation is due to ChatGPT's context window when using Plugins; I don't know why, since it should support 200k tokens... Alternatively, you can use https://textube.olivares.cl to get the full transcription of any video in English.

nomilk a day ago | prev | next |

I’d love this, but from the YT home page and search results page. That would let me ask ChatGPT whether a video really contains the info its thumbnail/title suggests, without having to leave the current browser tab.

I’ve done this by manually copy/pasting a YT transcript into ChatGPT (and later streamlining it into a bash function), and it was quite effective, allowing me to dodge a couple of clickbait time wasters (videos that looked important but were really just fluffing up unimportant nonsense).

Workaccount2 2 days ago | prev | next |

I don't know if everyone has access to it (might just be yt premium), but many videos have an "ask gemini about this video" button, where you can directly ask questions about the video.

ofou 2 days ago | root | parent | next |

It might be a preview or something, because I have YT Premium and it doesn't show up anywhere for me. Can you share a video where it works? For example, this one:

https://www.youtube.com/watch?v=zjkBMFhNj_g

oefrha a day ago | root | parent | prev | next |

It’s really ironic that YouTube basically pushed videos to be at least ~ten minutes long through commercial incentives, then offers AI features to cut through that filler garbage.

Workaccount2 21 hours ago | root | parent | next |

While this is true, the thrust of what youtube was doing was to incentivize creation of videos that are 10+ minutes because they need to be 10+ minutes, not 10+ minutes because you are trying to game the system.

adzm 2 days ago | root | parent | prev |

It is a beta feature in YouTube premium and doesn't seem to be for all videos, but it has been extremely useful in my experience. You can even ask where in a video things are discussed etc.

andai 2 days ago | prev | next |

Very nice. I made a thing in Python which summarizes a YouTube transcript in bullet points. Never thought about asking it questions, that's a great idea!

I just run yt-dlp to fetch the transcript and shove it in the GPT prompt. (I think also have a few lines to remove the timestamps, although arguably those would be useful to keep.)

My prompt is "{transcript} Please summarize the above in bullet points"

The trick was splitting it up into overlapping chunks so it fits in the context size. (And then summarizing your summary because it ends up too long cause you had so many chunks!)

These days that's not so important; usually you can shove an entire book in! (Unless you're using a local model; local models still have small context sizes, but work pretty well for summarization.)
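The strip-timestamps-then-chunk-with-overlap trick described above can be sketched in a few lines. This is an illustrative reconstruction, not andai's actual script, and the chunk/overlap sizes are arbitrary:

```python
import re

def strip_timestamps(transcript: str) -> str:
    """Remove [hh:]mm:ss-style timestamps that fetched subtitles often carry."""
    return re.sub(r"\[?\d{1,2}:\d{2}(?::\d{2})?\]?", "", transcript)

def chunk_overlapping(text: str, chunk_words: int = 3000, overlap_words: int = 200):
    """Split text into overlapping word-count chunks so each fits a small context window."""
    words = text.split()
    step = chunk_words - overlap_words
    return [" ".join(words[i:i + chunk_words])
            for i in range(0, max(len(words) - overlap_words, 1), step)]

# Each chunk would then be sent with a prompt like
#   f"{chunk}\n\nPlease summarize the above in bullet points"
# and the concatenated chunk summaries summarized once more.
```

The overlap keeps sentences that straddle a chunk boundary from being cut in half and lost from both summaries.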

HPsquared 2 days ago | root | parent | prev |

If you're going as far as using yt-dlp, why not run the audio through Whisper?

andai 2 days ago | root | parent | next |

Interesting, I haven't used Whisper, is it cost effective? Seems to be about 36 cents per (hour long) video? How long does processing take?

kajecounterhack 2 days ago | root | parent | next |

You can run it locally, and it's really fast. But since YouTube transcription is really good, I don't see why you'd use Whisper and get a worse transcription (unless maybe it's on videos that Google did not transcribe for whatever reason).

gs17 2 days ago | root | parent |

> But since YouTube transcription is really good

Are you sure you're looking at automatic transcripts? YouTube transcripts are bizarrely low quality if they're not provided by the creators (I've actually used my Google Pixel's live transcription to make better captions occasionally).

I just checked a video my girlfriend uploaded a week ago and the auto-transcript was still pretty messy. I've used Whisper for the same task and it's significantly better.

ofou a day ago | root | parent |

Agreed. However, you can get great YT transcriptions using GPT-4o mini to clean them up.

HPsquared a day ago | root | parent | prev |

36 cents an hour is how much it costs to hire an entire GPU like an A4000. I can assure you Whisper runs much, much faster than 1x!

davidzweig 2 days ago | root | parent | prev |

The security against downloading audio from YouTube has been upped recently with 'PO tokens'.

Whisper is only a few tenths of a cent per hour transcribed if transcribing on your gpu though, at about 30x real-time on a 3080 etc. with batching.
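A quick sanity check of the arithmetic in this subthread. The A4000 rate is the figure quoted above; the 30x speedup is the claim for a 3080 with batching, so the per-hour cost is approximate:

```python
# Cost per transcribed audio hour = hourly GPU rental rate / real-time speedup.
gpu_rate = 0.36   # $/hr, the A4000 figure quoted above
speedup = 30      # ~30x real-time with batching, per the comment above

cost_per_audio_hour = gpu_rate / speedup
print(f"${cost_per_audio_hour:.4f} per transcribed audio hour")  # $0.0120
```

At a cheaper consumer-card rental rate (say around $0.10/hr, an assumed figure), the same math lands in the "few tenths of a cent" range mentioned above.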

iorrus 2 days ago | prev | next |

I've been using Voxscript [0] for a while, after comparing the two I think voxscript is better, gives longer more detailed summaries, TexTube just seems to give a very brief impersonal overview. Easy to try both and see which you prefer.

[0] https://chatgpt.com/g/g-g24EzkDta-voxscript

ofou 2 days ago | root | parent |

TexTube is not giving summaries but the actual transcripts. Plus, mine is way faster ;)

Compare the results:

TexTube: https://chatgpt.com/share/66e9f424-32c4-8009-b761-c8a8d6fbec...

VoxScript: https://chatgpt.com/share/66e9f443-31d8-8009-b396-dba11b2f5b...

iorrus 2 days ago | root | parent |

Hmm, it didn’t work that way for me. First I asked it to summarise a video, then I simply posted the link to the video assuming it would give the transcript; in both cases it summarised the transcript.

But if I start a new session and simply paste the link to the video it gives the transcript. I’m not sure an llm is the best solution to getting full transcripts.

jonwinstanley 2 days ago | prev | next |

What does it mean by chat with a transcript?

I.e. what are the kind of things I can ask and get value from?

ofou 2 days ago | root | parent | next |

First, I would say that reading is faster than watching. Therefore, it is more time-efficient to read a YouTube video, especially if it covers technical content or interesting ideas. Additionally, you can ask follow-up questions about the content, and since it's in an OAI conversation, you can leverage the "intelligence" of the model to help you understand the parts that you find difficult. Sometimes, I watch technical YouTube videos and wish I had a written version; so here it is.

This is an interesting example; it feels different from watching the ~12min video. https://chatgpt.com/share/66e9eaff-248c-8009-9761-d848d97881...

kylebenzle 2 days ago | root | parent | prev |

Nothing, it means nothing, like most of this "AI" hype nonsense.

They copy-paste text transcripts into an LLM and have it generate more text based on its training and prompt data. You can't "chat" with a text document, of course.

yreg 2 days ago | root | parent | next |

Chat with the document means chat about that document with an LLM who has “read” it.

It can be useful; it's not hype nonsense.

jonwinstanley 2 days ago | root | parent |

Ahh ok.

So rather than watch the video or read the transcript you just ask the one thing you want to know.

Could it take you to the moment in the video that is useful too?

yreg a day ago | root | parent |

You could ask it for a couple of verbatim sentences from the transcript that are most related to what you are interested in, then find the timestamp for that text. (There could be UI for this.)

Another solution would be to skip the LLM prompting part altogether and

1. break the transcript into short sections

2. create embeddings from them and remember the timestamps for each

3. embed your query (what are you interested in)

4. calculate the closest embedding in the transcript to your query

5. return the original timestamp
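The five steps above can be sketched end to end. A real system would call an actual embedding model in `embed()`; here a toy bag-of-words vector stands in so the flow is self-contained, and the section texts and timestamps are made up:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for an embedding model: a bag-of-words vector.
    A real implementation would call a sentence-embedding API here."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_timestamp(sections, query):
    """sections: list of (timestamp_seconds, text) pairs.
    Returns the timestamp of the section closest to the query."""
    vectors = [(ts, embed(text)) for ts, text in sections]    # steps 1-2
    q = embed(query)                                          # step 3
    best_ts, _ = max(vectors, key=lambda p: cosine(p[1], q))  # step 4
    return best_ts                                            # step 5

sections = [
    (0,   "intro and channel housekeeping"),
    (95,  "how transformers use attention"),
    (310, "sponsor read and outro"),
]
print(find_timestamp(sections, "explain attention in transformers"))  # 95
```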

ofou a day ago | root | parent |

That's a good idea. However, I believe the challenging part lies in first reconstructing the short utterances into coherent, meaningful paragraphs.

Currently, with the API [1], you can retrieve JSON with timestamps. The main issue, though, is how to parse the text effectively into meaningful sentences and then add the timestamps at the beginning of each paragraph. WIP.

[1]: https://textube.olivares.cl/watch?v=9iqn1HhFJ6c&format=JSON
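A rough sketch of that parsing step. It assumes the JSON is a list of `{"start": seconds, "text": utterance}` objects (the field names are a guess, not the API's documented format), and uses naive terminal-punctuation sentence detection, which raw speech transcripts would need a cleanup pass to satisfy:

```python
def to_paragraphs(utterances, sentences_per_para=3):
    """Join short utterances into paragraphs, prefixing each paragraph
    with the [mm:ss] timestamp of its first utterance."""
    paras, current, start, sentence_count = [], [], None, 0
    for u in utterances:
        if start is None:
            start = u["start"]
        current.append(u["text"].strip())
        if u["text"].rstrip().endswith((".", "?", "!")):
            sentence_count += 1
        if sentence_count >= sentences_per_para:
            m, s = divmod(int(start), 60)
            paras.append(f"[{m:02d}:{s:02d}] " + " ".join(current))
            current, start, sentence_count = [], None, 0
    if current:  # flush any trailing partial paragraph
        m, s = divmod(int(start), 60)
        paras.append(f"[{m:02d}:{s:02d}] " + " ".join(current))
    return paras
```

The hard part the comment points at is upstream of this: turning fragmentary utterances into well-punctuated sentences in the first place, which is where an LLM cleanup pass earns its keep.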

camus_absurd 2 days ago | root | parent | prev |

I’m not sure I follow. Can you explain ‘you can’t chat with a text document’ because you clearly can.

hombre_fatal 2 days ago | root | parent | next |

Is anyone even chomping at the bit to hear a pedant explain how "chatting with a text document" isn't the most precise way to phrase this concept that we all understand?

ipaddr a day ago | root | parent |

chatting with a bot about a text document.

chatting about a text document

Chatting with a text document implies it has AI or magical abilities.

You wouldn't say you are chatting with your dog if you are talking to your wife about your dog.

afro88 2 days ago | prev | next |

When I try it it just says "Not found"

ofou 2 days ago | root | parent |

Can you share the link?

afro88 a day ago | root | parent |

I clicked on one of the examples, which was "State of GPT by Andrej Karpathy"

ofou 21 hours ago | root | parent |

Sometimes the model used by Plugins gets confused, especially when the transcript is too long. It might just load the content into memory as a response without saying much more; you can then engage in follow-up chat interactions. But I just tried the link again and it seems to work. Sometimes you have to try a few times, or explicitly ask for the transcript if it isn't shown.

https://chatgpt.com/share/66eadbad-1d3c-8009-91f0-abe3cf4d36...

tsunamifury 2 days ago | prev | next |

allofus.ai already congregates all of the thinking of any creator on YouTube into a single mental model and allows you to interact with their synthetic self.

lupusreal 2 days ago | prev | next |

Seems like fishing with hand grenades to me. I just download the subs and grep that.

mdp2021 2 days ago | root | parent | next |

Even just experience with `man` pages and "/<term>" searches shows that it's a suboptimal strategy; it leaves you wanting a query engine that actually understands what it reads.

lupusreal 2 days ago | root | parent |

Really? I generally have a good experience with searching manpages. My big gripe with those is the man program itself.

mdp2021 2 hours ago | root | parent |

Mine is that directly asking a question ("How to...") would be much faster than finding the information through grep or highlight-aided skimming. It would just be more efficient.

Also, in order to find a feature through a literal string, you first have to guess the string. Language is inherently fuzzy, so literal searches are weaker for this purpose than an interface that deals with the fuzzy side of expression.

studymonkey a day ago | prev |

Awesome work, OP! I really believe we’ll soon be able to get a full four-year education just from YouTube. The challenge right now is sifting through the infotainment that the algorithms tend to push.

This is actually what inspired us to create Lectura: https://lectura.xyz/

We’ve added features that promote curiosity and deeper learning, like ELI5 explanations, suggested queries based on transcripts, quizzes to track retention, and more.

If you’re interested in joining us to build out the platform, feel free to reach out at neil at lectura dot xyz