I have been trying a few things and I am getting close to what I want.
My poller has two main loops. It runs on a Compute Engine VM on Google Cloud.
The first loop polls every 5 seconds for all the livestreams, then compares each stream's broadcasting status against the previous result. If it has changed, the livestream is added to one of two data structures, depending on whether the status went from true to false or from false to true.
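To make that concrete, here is a simplified sketch of the diffing step (`fetch_livestreams` is a placeholder for my actual API call, and the field names are my own, not yours):

```python
import time

def fetch_livestreams():
    """Placeholder for the real API call; returns {stream_id: is_broadcasting}."""
    raise NotImplementedError

def diff_broadcast_status(previous, current):
    """Compare two status snapshots and split the changes into two lists."""
    started, stopped = [], []
    for stream_id, broadcasting in current.items():
        was_broadcasting = previous.get(stream_id, False)
        if broadcasting and not was_broadcasting:
            started.append(stream_id)   # false -> true: wait for HLS 200
        elif was_broadcasting and not broadcasting:
            stopped.append(stream_id)   # true -> false: wait for HLS 404
    return started, stopped

def poll_loop(started_queue, stopped_queue, interval=5):
    """The 5-second loop: fetch, diff against the last snapshot, enqueue changes."""
    previous = {}
    while True:
        current = fetch_livestreams()
        started, stopped = diff_broadcast_status(previous, current)
        started_queue.extend(started)
        stopped_queue.extend(stopped)
        previous = current
        time.sleep(interval)
```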
The second loop runs every second. It takes the two data structures holding the livestreams whose broadcasting status changed and does a GET request on their HLS link. For livestreams that went from broadcasting false to true, it waits for a 200 status on the HLS link; for livestreams that stopped broadcasting, it waits for a 404.
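And a simplified version of one pass of that second loop (`hls_status` and `probe_pending` are my own names; the real loop just calls this once per second, and `get_status` is injectable so I can test the logic without the network):

```python
import urllib.error
import urllib.request

def hls_status(url):
    """Return the HTTP status code of a GET on the HLS manifest URL."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

def probe_pending(waiting_for_200, waiting_for_404, get_status=hls_status):
    """One pass: find streams whose manifest reached the expected status,
    remove them from their waiting list, and report them."""
    now_watchable = [u for u in waiting_for_200 if get_status(u) == 200]
    now_offline = [u for u in waiting_for_404 if get_status(u) == 404]
    for u in now_watchable:
        waiting_for_200.remove(u)
    for u in now_offline:
        waiting_for_404.remove(u)
    return now_watchable, now_offline
```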
This seems to work well for detecting when a livestream is online and watchable. It is a bit slow (about a 20-second delay between the start of the broadcast and the 200 status on the HLS link), but that is manageable.
However, when a livestream stops, something is wrong. The broadcasting change from true to false is detected correctly, but the HLS manifest does not switch to 404 (not always, but most of the time). It seems to be cached, either on your servers or somewhere on the way to my VM. That alone would be acceptable for detecting the end of the stream, since I could rely on the broadcasting change to stop promoting it. The real problem comes when the streamer tries to restart the stream: I am not able to detect when the video becomes watchable, because the HLS manifest is already returning 200.
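In case the caching happens along the way and keys on the full URL (that is just a guess on my part), I was thinking of probing with a cache-busting query parameter plus a no-cache request header, roughly like this (`probe_uncached` and `cache_bust` are my own names, not from your API):

```python
import time
import urllib.error
import urllib.request

def cache_bust(url, token):
    """Append a throwaway query parameter so each probe has a unique URL."""
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}_cb={token}"

def probe_uncached(manifest_url):
    """GET the manifest while trying to bypass intermediate caches.
    Assumes the caches key on the full URL and/or honor Cache-Control,
    which may not hold for whatever sits in front of the manifests."""
    busted = cache_bust(manifest_url, int(time.time() * 1000))
    req = urllib.request.Request(busted, headers={"Cache-Control": "no-cache"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 404 once the stream is really gone
```

But if the cache is on your servers, or if it ignores query strings, this would not help, which is part of why I am asking.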
Would you know more about this caching problem?
Do you have any solution for me?
Thanks for the help! It is really appreciated.