GET Video high response latency in the US

I’m experiencing high latency when fetching a video URL. We’re using the private video feature, so we must fetch a new URL every time we view a video. On average, the response time to get a video asset with a new token is ~3s in the US. Is there anything I can do on my end to speed this up, like specifying a region?

The site feels slow since we can’t render a video until we get this URL. One suggestion I’d like to make: return whether the video is playable from the getVideo endpoint instead of requiring a separate getStatus call.
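To illustrate, here is roughly what our load path looks like today. This is just a sketch; the base URL, endpoint paths, and response fields are my placeholders based on how we use getVideo and getStatus, not the exact API:

```typescript
// Rough sketch of our current sequential flow; all paths, headers, and
// response fields below are placeholders, not the exact API.
const API_BASE = "https://sandbox.example.com"; // placeholder sandbox base URL

async function loadVideo(videoId: string, apiKey: string): Promise<string | null> {
  const headers = { Authorization: `Bearer ${apiKey}` };

  // 1) getVideo: fetch a fresh tokenized URL (~1.5-3s observed from us-central1).
  const videoRes = await fetch(`${API_BASE}/videos/${videoId}`, { headers });
  const video = await videoRes.json();

  // 2) getStatus: check whether the video is playable (another ~1.5-3s).
  const statusRes = await fetch(`${API_BASE}/videos/${videoId}/status`, { headers });
  const status = await statusRes.json();

  // The player can only be rendered after both round trips, so the latencies stack.
  return status.encoding?.playable ? (video.assets?.player ?? null) : null;
}
```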

Edit:

Now that I’m looking, it seems all endpoints have a base response time of ~1.5-2s. Queries are being made from us-central1 in Google’s data centers.

I also forgot to mention that these requests are going to the Sandbox. Perhaps production is faster?


Thanks for the update. The devs are in France, so we may not get a reply until tomorrow morning.

Did a little more digging, and it seems the requests are going to Canada; the raw network latency there is not bad, averaging ~90ms. I’m assuming the video lookup itself is what drives the latency up. Are there any plans to speed it up? Would love to get the URL quickly so we can load the player ASAP.

Thanks, Jeff. I suspected the lookup was going somewhere farther away. Is there a way to batch-create tokens? Creating tokens one by one via getVideo requests would take too long with an average latency of ~2.5 seconds. It does complicate the setup quite a bit, but it will likely be worth it. I think I’d need to detect when the remaining tokens drop below a certain threshold and then generate more ahead of time.
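To make the pre-generation idea concrete, this is the kind of pool I have in mind, one per video. It’s only a sketch: fetchToken stands in for a single getVideo call, and the thresholds are made up:

```typescript
// Minimal sketch of a pre-generated token pool, one instance per video.
// fetchToken, lowWater, and batchSize are all hypothetical placeholders.
class TokenPool {
  private tokens: string[] = [];
  private refilling = false;

  constructor(
    private fetchToken: () => Promise<string>, // wraps a single getVideo call
    private lowWater = 10,   // refill when fewer than this many tokens remain
    private batchSize = 50,  // how many tokens to generate per refill
  ) {}

  async take(): Promise<string> {
    if (this.tokens.length < this.lowWater) void this.refill(); // refill in the background
    // Fall back to a direct call if the pool is empty (cold start or burst).
    return this.tokens.pop() ?? this.fetchToken();
  }

  private async refill(): Promise<void> {
    if (this.refilling) return;
    this.refilling = true;
    try {
      const batch = await Promise.all(
        Array.from({ length: this.batchSize }, () => this.fetchToken()),
      );
      this.tokens.push(...batch);
    } finally {
      this.refilling = false;
    }
  }
}
```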

@michael5

Jeff and I have been chatting about this earlier today. This is also on our roadmap, and we are working out the specifics of how it should work with our development team:

How many tokens at a time? 100? 1k?
If I generate 1k tokens, do they expire? Can they be revoked?

Anyway - we’re on it - but I don’t have an ETA for you.

Doug


I think what I can do for now is make multiple getVideo requests asynchronously. I’m already doing something like this with getVideo and getStatus, calling both in parallel to reduce total latency. It might be useful to include some of the status payload in getVideo, since I’m only checking status to see whether the video is playable.
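For reference, the parallel call on our end looks roughly like this (same caveat as before: paths and response fields are placeholders, not the exact API):

```typescript
// Sketch: getVideo and getStatus issued in parallel (paths and fields are assumptions).
const API_BASE = "https://sandbox.example.com"; // placeholder base URL

async function loadVideoParallel(videoId: string, apiKey: string): Promise<string | null> {
  const headers = { Authorization: `Bearer ${apiKey}` };

  const [videoRes, statusRes] = await Promise.all([
    fetch(`${API_BASE}/videos/${videoId}`, { headers }),        // getVideo
    fetch(`${API_BASE}/videos/${videoId}/status`, { headers }), // getStatus
  ]);
  const video = await videoRes.json();   // contains the tokenized player URL
  const status = await statusRes.json(); // contains the playable flag

  // Total wait is roughly max(getVideo, getStatus) instead of their sum.
  return status.encoding?.playable ? (video.assets?.player ?? null) : null;
}
```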

Am I going to hit any throttles doing this?

You should be fine on the throttle. But you might find it easier to use our webhook to get the encoding status pushed to your server. You’ll get an update when each version of the video has finished encoding and is ready for playback.
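Very roughly, the idea is that you expose an endpoint we POST an event to as each encoding finishes. A minimal sketch, with the route and payload fields purely illustrative rather than the exact webhook schema:

```typescript
// Minimal webhook receiver sketch (Node built-ins only; payload fields are illustrative).
import { createServer } from "node:http";

const playable = new Set<string>(); // videoIds whose encoding has finished

createServer((req, res) => {
  if (req.method === "POST" && req.url === "/webhooks/video-encoded") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      // Assumed payload shape: { videoId: string, encoding: { playable: boolean } }
      const event = JSON.parse(body);
      if (event.encoding?.playable) playable.add(event.videoId);
      res.writeHead(200);
      res.end("ok");
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);
```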

Based on our traffic, I’m thinking about requesting 50 tokens at a time per video, issuing the requests in parallel. Do you anticipate any issues with this? Typically, many kids take the same class together and all watch the same video, so tokens will likely be needed in bursts.
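Something like this is what I have in mind for each burst; requestToken is just a placeholder for one getVideo call, and partial failures only shrink the batch:

```typescript
// Sketch of one burst: request 50 tokens in parallel and keep whatever succeeds.
// requestToken is a placeholder for a single getVideo call returning a tokenized URL.
async function burstTokens(
  requestToken: () => Promise<string>,
  count = 50,
): Promise<string[]> {
  const results = await Promise.allSettled(
    Array.from({ length: count }, () => requestToken()),
  );
  // Keep successful tokens; a few failures just mean a slightly smaller pool.
  return results
    .filter((r): r is PromiseFulfilledResult<string> => r.status === "fulfilled")
    .map((r) => r.value);
}
```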

Hi Michael! I am a PM on the team.

So if you pre-generate these tokens, this shouldn’t be a problem.

But overall, making the token generation process faster for you and for others is indeed on our roadmap. If you want to have a chat with the team, I’m happy to join to learn more about your use case and walk you through the different solutions we are exploring. This is something we want to start addressing soon, most probably this quarter.


Hi Alexandre,

I would love to have a chat. We’re keen on making this product work for us, and so far the biggest criticism from the team has been the speed of loading videos. We’re doing some final testing and will soon be launching with that high latency, unfortunately. I’ll follow up with a token pre-generation approach to cut that time in half, but it complicates the flow quite a bit. Ideally, the video data would be replicated across regions or cached, so we could confidently make a single API call to get a token.