Build in Public Complete Automation Overview 14-11
Bisma Majeed
10 days ago · 1 view
Transcript
00:00
All right, Happy Friday guys.
00:02
So, on the 14th of November, I'm recording my daily log as well as giving a demo of the automation that I worked on this week.
00:11
So it starts from here, in Slack.
00:16
For this automation I'm considering three specific channels.
00:20
We have one for the daily logs, another for show and tell, and one for the builder tips.
00:27
Each of these has a dedicated Google Drive folder: in Google Drive we have builder tips, daily logs, and show and tell.
00:37
So what happens when I come in? Let's say I like this video; to make it easier to understand, I want to show it live.
00:50
So I go ahead and share a Tella video link, whatever my daily log video is. I simply paste it in here, and then on the back end, right.
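As a rough sketch of that first step, the service needs to pull the Tella link out of the incoming Slack message. The regex, the Slack unwrapping, and the function name below are my assumptions, not the service's actual code:

```typescript
// Hypothetical link-detection helper for the Slack listener step.
// Slack wraps URLs in angle brackets, e.g. <https://...|label>, so we
// unwrap first and then match Tella share URLs.
const TELLA_LINK_RE = /https:\/\/www\.tella\.tv\/video\/[\w-]+/g;

function extractTellaLinks(slackText: string): string[] {
  const unwrapped = slackText.replace(/<([^|>]+)(\|[^>]*)?>/g, "$1");
  return unwrapped.match(TELLA_LINK_RE) ?? [];
}
```

Anything that doesn't match simply yields an empty list, so non-link messages in the three channels are ignored.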
01:01
You would see that there is this build-in-public board, which is currently showing that it's working on it.
01:07
And once it's done processing the transcript and all the next steps, it will update it with this check mark.
01:16
So right now it's working on it.
01:17
Let's see what the logs are on the backend.
01:20
All right, so here I am on the Answer Agent Render dashboard, where this service is running, and I can see the current version that is deployed.
01:34
I would like to see what the logs show and how the file is processed.
01:39
So, right, we started at 6:03.
01:42
You can see that the Slack message was received and it successfully fetched one of the links from it, which is the Tella video link.
01:49
Then it processes that video. To auto-transcribe it further, we first need to download the video.
02:00
So now it sends it through another tool called yt-dlp, and once the video is downloaded (we have the logs for that here), we can't send the whole video as-is to the Whisper API, because there are size limitations.
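The download step could be sketched as shelling out to yt-dlp from Node. The flags shown are real yt-dlp options, but the output template, format choice, and error handling below are simplified assumptions about the service:

```typescript
// Hedged sketch of the download step: spawning yt-dlp as a child process.
import { spawn } from "node:child_process";

function buildYtDlpArgs(url: string, outPath: string): string[] {
  // -f mp4 prefers an MP4 download (for the later ffmpeg step);
  // -o sets the output file path.
  return ["-f", "mp4", "-o", outPath, url];
}

function downloadVideo(url: string, outPath: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const proc = spawn("yt-dlp", buildYtDlpArgs(url, outPath));
    proc.stderr.on("data", (chunk) => process.stderr.write(chunk)); // surface yt-dlp's own logs
    proc.on("error", reject); // e.g. yt-dlp not installed on the host
    proc.on("close", (code) =>
      code === 0 ? resolve() : reject(new Error(`yt-dlp exited with code ${code}`)),
    );
  });
}
```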
02:20
So what we are doing here is that we are extracting the audio from the video.
02:24
In this step, you can see, we're extracting the audio.
02:27
Once it's converted from MP4 to MP3, we have a workable file within the size limit, and that is sent to OpenAI Whisper for transcription, to create a VTT file out of it.
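That conversion step can be sketched like this, assuming ffmpeg is on the PATH. The 25 MB figure is OpenAI's documented upload limit for the audio transcription endpoint; the helper names and bitrate are my assumptions:

```typescript
// Hedged sketch of the audio-extraction step before Whisper transcription.
import { execFile } from "node:child_process";
import { statSync } from "node:fs";

const WHISPER_MAX_BYTES = 25 * 1024 * 1024; // OpenAI's documented 25 MB upload limit

function buildFfmpegArgs(mp4Path: string, mp3Path: string): string[] {
  // -vn drops the video stream; a 64 kbps MP3 keeps speech intelligible and small
  return ["-i", mp4Path, "-vn", "-b:a", "64k", mp3Path];
}

function fitsWhisperLimit(path: string): boolean {
  return statSync(path).size <= WHISPER_MAX_BYTES;
}

function extractAudio(mp4Path: string, mp3Path: string): Promise<void> {
  return new Promise((resolve, reject) => {
    execFile("ffmpeg", buildFfmpegArgs(mp4Path, mp3Path), (err) =>
      err ? reject(err) : resolve(),
    );
  });
}
```

The Whisper request itself can then ask for `response_format: "vtt"` to get the VTT file the video mentions.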
02:41
Here we have the extracted transcription; moving forward, we use this transcription and send it to our staging Chief Sidekick, or rather, to our document store.
02:53
Let me show you what the document stores look like.
02:56
Here I am in AnswerAI staging. Here are the four different document stores.
03:00
I have created each of these separately: number one is the daily logs.
03:05
Then we have show and tell, builder tips, and the content pack.
03:08
So for the daily logs, let's say if a video was added, this is how it should look.
03:13
This is how it would be for this kind of log.
03:17
And then we have detailed metadata, where we can see who the speaker was,
03:22
the folder name, the exact file path,
03:25
then the date, the content type, and the source,
03:30
so we can easily query it further.
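As an illustration, the metadata record shown on screen could have a shape like this; the field names and types are assumptions based on what is visible, not the service's actual schema:

```typescript
// Hypothetical shape of the per-transcript metadata stored alongside each
// document: speaker, folder, file path, date, content type, and source.
type ContentType = "daily-logs" | "show-and-tell" | "builder-tips";

interface TranscriptMetadata {
  speaker: string;      // Slack user who shared the link
  folderName: string;   // matching Google Drive folder
  filePath: string;
  date: string;         // ISO date (YYYY-MM-DD) of the share
  contentType: ContentType;
  source: string;       // original Tella URL
}

function buildMetadata(
  speaker: string,
  contentType: ContentType,
  filePath: string,
  source: string,
): TranscriptMetadata {
  return {
    speaker,
    folderName: contentType, // folder name mirrors the channel / content type
    filePath,
    date: new Date().toISOString().slice(0, 10),
    contentType,
    source,
  };
}
```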
03:32
One thing to notice here: based on the Slack message, if I go in and share a Tella link, it records me as the speaker for that specific video.
03:43
So when Diego or Max comes in and shares a link, the speaker field is updated accordingly for that specific video.
03:51
Next, in Cursor I have shared this detailed write-up.
03:57
If you want any details on how the project is set up, you can read all the details in there.
04:04
Once that part is done, the transcription has been added to the document store itself.
04:10
Let's go back to see what is the next step.
04:13
Once this API request is completed, I'm also refreshing the document store so it upserts correctly.
04:20
Then, from the same transcript,
04:24
I'm creating a parent Linear ticket, so for this specific content piece we can track everything: each of the content pieces shared or completed out of it.
04:38
So under Projects you would go into Build in Public Content Engine.
04:42
This is the project and here you will be able to see all the tickets created for each of the videos.
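The ticket-creation step (and the child issues mentioned later) could be sketched against Linear's GraphQL API. The `issueCreate` mutation and `parentId` field are real Linear API features, but the team/project IDs and helper names here are placeholders:

```typescript
// Hedged sketch of the Linear step: one parent ticket per video, and
// (optionally) child issues per content piece via parentId.
interface IssueInput {
  teamId: string;
  projectId: string;
  title: string;
  description: string;
  parentId?: string;
}

function buildIssueInput(title: string, description: string, parentId?: string): IssueInput {
  return {
    teamId: "TEAM_ID_PLACEHOLDER",       // placeholder, not the real team ID
    projectId: "PROJECT_ID_PLACEHOLDER", // placeholder, not the real project ID
    title,
    description,
    ...(parentId ? { parentId } : {}),
  };
}

async function createIssue(input: IssueInput): Promise<string> {
  const res = await fetch("https://api.linear.app/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: process.env.LINEAR_API_KEY ?? "",
    },
    body: JSON.stringify({
      query: `mutation($input: IssueCreateInput!) {
        issueCreate(input: $input) { issue { id identifier } }
      }`,
      variables: { input },
    }),
  });
  const json = (await res.json()) as any;
  return json.data.issueCreate.issue.id;
}
```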
04:48
So the next step is for each of the separate tickets; let me share.
04:54
Right, so after the parent ticket is completed...
04:59
Yep. So this needs to be discussed.
05:01
We are currently hitting our usage limit for Linear tickets.
05:06
So maybe we need to have a conversation on this.
05:09
What are we planning on this one?
05:12
Then we also upload the video to Google Drive, along with the transcript.
05:19
The main plan is that all of this content will be added to our Drive first and then to the document stores.
05:27
For example, in this one we can see the video file uploaded as well as the transcription file.
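The Drive upload could be sketched with the Drive v3 REST multipart upload, assuming a bearer token from a service account. The folder-ID map and helper names are placeholders standing in for the three real channel folders:

```typescript
// Hedged sketch of the Drive upload step (Drive v3 multipart upload).
const DRIVE_FOLDER_IDS: Record<string, string> = {
  "daily-logs": "FOLDER_ID_DAILY_LOGS",       // placeholders, not real IDs
  "show-and-tell": "FOLDER_ID_SHOW_AND_TELL",
  "builder-tips": "FOLDER_ID_BUILDER_TIPS",
};

function folderIdFor(channel: string): string {
  const id = DRIVE_FOLDER_IDS[channel];
  if (!id) throw new Error(`no Drive folder mapped for channel: ${channel}`);
  return id;
}

async function uploadToDrive(
  token: string,
  name: string,
  channel: string,
  data: Buffer,
  mimeType: string,
): Promise<void> {
  // Multipart body: a JSON metadata part (name + parent folder) plus the media part.
  const boundary = "drive-upload-boundary";
  const metadata = JSON.stringify({ name, parents: [folderIdFor(channel)] });
  const body = Buffer.concat([
    Buffer.from(`--${boundary}\r\nContent-Type: application/json; charset=UTF-8\r\n\r\n${metadata}\r\n`),
    Buffer.from(`--${boundary}\r\nContent-Type: ${mimeType}\r\n\r\n`),
    data,
    Buffer.from(`\r\n--${boundary}--`),
  ]);
  const res = await fetch(
    "https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": `multipart/related; boundary=${boundary}`,
      },
      body,
    },
  );
  if (!res.ok) throw new Error(`Drive upload failed: ${res.status}`);
}
```

Both the MP4 and the transcript file would go through the same call, just with different names and MIME types.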
05:34
All right.
05:36
The next step is that we finally send the entire transcript and the metadata to Answer Agent,
05:42
which I can show you as well.
05:49
It's under Build in Public Content Pack Generator, which is a simple LLM node.
05:54
It's a simple agent, and it gives me multiple content pieces out of it: social media captions, some hooks, and all of that.
06:04
So this is how we send the entire data to Answer Agent.
06:11
Now, on this one, we were having some issues with the text file loader.
06:15
Right, and it is a known issue that using plain text loaders is maybe not efficient enough.
06:21
But for the time being we are using plain text loaders because there were issues with the other one.
06:25
but we'll change it afterwards.
06:27
all right.
06:29
Another thing that we are doing here is then generating a blog markdown out of it using the same pattern that we already have.
06:39
So...
06:43
under Answer Agent, in the same repo, under packages, content, and blogs, this is where I'm adding the markdown format of these blogs
06:53
for all three videos.
06:55
yeah, these are the ones.
07:01
Yeah, so it is using the same pattern.
07:05
We can see it in more detail like what needs to be updated in this one.
07:10
After that, this part is done.
07:13
Now that we have created a pull request for the markdown, we also have the Linear ticket created.
07:20
And another thing I missed sharing: all of the content pieces from that specific video are added as child issues under the same parent ticket.
07:30
So for example, let me explain.
07:32
Let's say if this was the initial parent ticket, if this was my main video, then all of these are created as separate child issues.
07:38
So we can track each of them, and if a specific piece is posted on X, we can just mark it as resolved.
07:45
This is just to have like a visual dashboard of where we are at with each of our content pieces for each of the videos that we are generating.
07:53
The video is getting longer, but I do want to share one more thing.
07:58
Today most of my time was spent on queueing this process, and then there was another issue in between.
08:07
When I was going through the video download and transcription process, it was only sending the transcription of a five-second clip to the Answer Agent document store.
08:17
And it was just like completely failing my entire automation because there was not enough context.
08:23
The way it was resolved involved lots of communication and lots of trial and error.
08:29
But one thing I learned is that when we use the URL from the Tella video, it somehow serves the video as a two-item playlist.
08:43
And the way this automation was set up, at one point it was taking just the first clip, which is a five-second preview, doing all the processing on that, and completely ignoring the real video.
08:55
So a small tweak, ignoring this five-second preview from the Tella video and instead using the full clip, resolved the issue.
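The fix described here can be sketched as: given the two-item playlist, pick the longest entry instead of the first one, so the five-second preview is never the one processed. The types and function name are illustrative assumptions:

```typescript
// Sketch of the playlist fix: Tella serves the share URL as a two-item
// playlist where item 1 is a ~5 s preview; pick the longest entry so the
// real recording is the one downloaded and transcribed.
interface PlaylistEntry {
  id: string;
  durationSeconds: number;
}

function pickRealVideo(entries: PlaylistEntry[]): PlaylistEntry {
  if (entries.length === 0) throw new Error("empty playlist");
  return entries.reduce((best, e) =>
    e.durationSeconds > best.durationSeconds ? e : best,
  );
}
```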
09:02
And before that, Node.js was hitting the download limit, and that was causing my overall service to fail entirely.
09:10
So this is the current workflow. And one more important thing:
09:17
it's already live, so you guys can test it.
09:21
This document store all goes in under Chief Sidekick.
09:28
If I go into my enterprise admin, I can go to the chat page, using this organization default template.
09:37
This is all set up in here already, and I have asked a couple of questions, like "what has the team
09:47
discussed recently on builder tips?", and it fetched the correct information about what I recently discussed.
09:54
But again, the service is live from now on.
09:59
So I am trying to add the previous clips to the Document Store as well.
10:03
But while testing, if you feel like any data is missing, that's because that information is not in the document store yet;
10:11
from now onwards, all data will be automatically synced to the document store.