Add Past-Date On-Screen Timestamp to Video?
So, I really screwed up on a video project: I had to record a 30+ day event continuously, and was supposed to put a time/date stamp on each video segment so that it could easily be matched to a particular part of the event later. I dated the filenames, but forgot to put an on-screen timestamp on the video!
I know that ffmpeg can do it while recording from a live source or transcoding an existing one, but doing it that way would require me to set my system clock back by over a month and then let it transcode for another 30+ days in order to get a continuous timestamp on each video. Is there any way to insert a past-date timestamp that counts seconds, etc., without doing it the REALLY slow way? Note: it doesn't have to stand up to scrutiny regarding being "doctored", so long as the time/date is clearly visible and counts continuously. Thanks for your help! --Dane |
umm, at the risk of helping with an assignment, homework, or suchlike...and in the hope that I can help you save your neck (and possibly your job) with this...
I'd go about making a blank video that consists of only transparent frames with a visible timestamp (dunno HOW to generate that timestamp...yet, but I guess Python is your hugging friend here) and effectively merge the two videos in a video editor, with the "timestamp video" on top of the original one...export as one video... And I really hope I'm not getting into crap here :) - next time...don't screw up, okay? ;) Luck Thor Edit - a link on making a montage... Edit - I'd create a Python loop to create successively named images and store them in one folder, then import these in a video editor...the editor should figure out that these are a sequence and (hopefully) invite you to import them as a sequence in a video...from there on it should be a snap... |
I have never done it before, but I would try it like this.
Subtitle files (.srt) consist of timecodes and text. In your case the text would be a readable representation of the date/time at that point in the recording. Then run mencoder to hard-code the subtitles into the video. There are many manuals; this is just one: http://www.linuxandlife.com/2012/11/...to-videos.html jlinkels |
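[Editor's note: to make the subtitle suggestion concrete, here is a minimal, untested sketch of generating such an .srt in Python. The function name and the chosen start date are made up for illustration; the timecode math assumes each segment is under 24 hours.]

```python
from datetime import datetime, timedelta

def make_srt(start, total_seconds):
    """Build an .srt subtitle file as a string: one cue per second of video,
    each showing the (past) wall-clock date/time as its text."""
    def fmt(td):
        # SRT timecodes (HH:MM:SS,mmm) are relative to the start of the file.
        return "%02d:%02d:%02d,000" % (
            td.seconds // 3600, (td.seconds // 60) % 60, td.seconds % 60)
    lines = []
    for i in range(total_seconds):
        stamp = start + timedelta(seconds=i)  # the fake "recorded at" clock
        lines.append(str(i + 1))
        lines.append("%s --> %s" % (fmt(timedelta(seconds=i)),
                                    fmt(timedelta(seconds=i + 1))))
        lines.append(stamp.strftime("%Y-%m-%d %H:%M:%S"))
        lines.append("")  # blank line terminates each cue
    return "\n".join(lines)

# Example: three one-second cues starting at a past date/time.
srt = make_srt(datetime(2013, 8, 1, 12, 0, 0), 3)
print(srt)
```

Writing the result to `whatever.srt` and burning it in with mencoder (per the linked tutorial) should give a continuously counting on-screen clock.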
Thanks for your help, everyone! I'm going to give the subtitle suggestion a try and post back, one way or the other.
Thor 2.0, let me put your mind at ease: it's a volunteer project to document a month-long concert. No school, job, or money involved. :-) |
Give the subtitle option a whack...I never thought of that, but it is by far the better option...I even clicked the "Did you find this post helpful?" link for that... Thumbs up on the project :) Thor |
Thanks for your enthusiasm! It's been a wild ride. The video box *I* built (which was only operational for the last two weeks or so) timestamped everything, but I should have been paying attention to the other setups and checked that they were working properly before they went down completely. Now I have over 4TB of video to fix. :-p Live and learn...
A BASH script is imminent, and will be posted here when everything is working. :-) |
Ha! Yeah, seriously. And they want it done by Wednesday... XD
Some other fine folks organized the whole thing (and did a darned good job of it, despite a few hiccups); I'm basically in charge of fixing things that need fixing, at least where computers are concerned. Yes, this will go on my resume. :-) |
Alright, so I got the script almost done and the data in hand, and was about to finish the script and start the process when I decided it would be wise to do a simple avconv conversion on one file first and see how long it would take. Doing a bit of math to get a ballpark figure, it turns out that just timestamping the footage from one source for each time period (instead of the full 4TB+ of videos) was going to take 62 days of continuous running, or closer to 6 months if I didn't want to entirely give up the use of my primary computer until it was finished. The data needs to be collated and submitted within a matter of weeks, so the project organizers have decided that we did our best and to just turn in un-timestamped video for the time periods that lack it...and hope for the best. Either way, it will make a good story. (I don't think we'll be using setups built by the guys who failed to timestamp their stuff, in the future.)
Here's the skeleton-of-a-script that I came up with for creating a subtitle file and applying it automatically. Obviously, it's not complete, but I hope someone finds it useful. Please note that the various parts of the script haven't been tested, and may need re-writing. --Dane Code:
#!/bin/bash |
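[Editor's note: only the shebang of the skeleton script survived in this thread. As an illustration of the approach the poster describes (derive the start time from the dated filename, then hard-sub an .srt with mencoder), here is an untested Python sketch. The filename pattern `cam1_2013-08-01_1200.avi` is an assumption — the thread never shows the actual naming scheme — and the mencoder options follow the tutorial linked earlier, so they may need tweaking.]

```python
import re
from datetime import datetime

def start_time_from_name(filename):
    """Pull a start datetime out of a dated filename.
    Assumes a hypothetical YYYY-MM-DD_HHMM pattern somewhere in the name."""
    m = re.search(r"(\d{4})-(\d{2})-(\d{2})_(\d{2})(\d{2})", filename)
    if not m:
        raise ValueError("no date found in %r" % filename)
    y, mo, d, h, mi = (int(g) for g in m.groups())
    return datetime(y, mo, d, h, mi)

def burn_in_command(video, subs):
    """Build (but don't run) a mencoder invocation that hard-codes the .srt
    into the video; audio is copied, video re-encoded with lavc."""
    out = video.rsplit(".", 1)[0] + "_stamped.avi"
    return ["mencoder", video, "-sub", subs, "-subfont-text-scale", "3",
            "-oac", "copy", "-ovc", "lavc", "-o", out]

print(start_time_from_name("cam1_2013-08-01_1200.avi"))
print(" ".join(burn_in_command("cam1_2013-08-01_1200.avi", "cam1.srt")))
```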
...not wanting to give up, and issuing a call to arms...
What if...we use the Python Imaging Library? Create an image and use PIL's text method? This should create an image with a timestamp to be saved/added to a file/video... Any Pythonians here? Thor Edit Python def found Code:
def pil_image(request):
Giving these generated images consecutive names (img1.png, img2.png, ...) and importing the first one in OpenShot should make OpenShot offer to add them as a consecutive list of images... Someone else SHOULD have a neater option, I'm sure; this is Linux, gang, we don't give up...not that fast Edit - FFMPEG to combine the overlay video? Edit - similar challenge? |
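[Editor's note: the `pil_image` stub above is incomplete, so here is an untested sketch of what the PIL idea might look like. The function name, image size, and text position are made up; it renders onto a fully transparent RGBA background so an editor like OpenShot can overlay it, and uses ImageDraw's built-in default font.]

```python
from PIL import Image, ImageDraw

def timestamp_frame(text, size=(320, 60), path=None):
    """Render one timestamp overlay image: transparent background,
    white text drawn with PIL's default bitmap font."""
    img = Image.new("RGBA", size, (0, 0, 0, 0))  # fully transparent
    draw = ImageDraw.Draw(img)
    draw.text((10, 20), text, fill=(255, 255, 255, 255))
    if path:
        img.save(path)
    return img

# One consecutively named frame per second of video, as suggested above:
frame = timestamp_frame("2013-08-01 12:00:00", path="img1.png")
```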
That solution looks like it would work. I was using avconv, which is a fork of ffmpeg designed to have better syntax. The problem is that in order to mess with the video at all, we have to decode it, manipulate it, and re-encode it. That's what's taking so long...which makes sense when I think about it: it took 30+ days to encode the video (on the fly), so it stands to reason that it would take 30+ days to decode and 30+ more to re-encode. It's just really CPU-hungry.
|
hmm, perhaps (loose thinking here, sorry) there is another way around...don't decode/encode...but overlay a layer instead...
So, if the Python can create a strip of transparent frames with the required timestamp on it...export these as a video in one go, then import the two videos while overlaying the transparent one over the original one... Just a loose thought...don't kill me... :) |
Hehe, I won't kill you. :-)
In order to get a frame out of a video (via ffmpeg/avconv, VLC, or any program), it must first be decoded. Then one can overlay, modify, or mangle that frame. To get it back down to a reasonable size, it must then be re-encoded. To my knowledge, there's simply no way to modify a video without first decoding it, unless that video happens to be nothing more than a series of .bmp files (which don't use any compression or encoding)--which would be absurdly huge (and simply isn't done anymore). The "long answer" of how video encoding (and decoding) works is something like this:

1) A raw image is turned into a lossy, compressed image, such as a JPEG. This is done (more-or-less) by taking one line of the image's pixels and creating a list of how each line after it differs from the previous line. That list is then compressed further by aliasing the patterns of 1s and 0s, much like ZIP or other compression does it. There may be a few additional lines that are taken directly from the original image, and derived from for subsequent lines, in order to help preserve data integrity.

2) Once there is a whole sequence of such images, the same process is done in 2D: each frame (individual image) is turned into a list explaining how it differs from the frame before it. That list is, again, compressed as before. A few "key frames" are added periodically, which are verbatim images, in order to help improve data integrity.

3) All of this is stuck into a "container" that holds metadata about how many total frames there are, how long the video is, and so on. Such containers lend themselves to certain filename extensions, such as .avi, .mpg, .flv, and so on. They might all contain identical data, but tend to utilize different compression and listing methods, and contain different/more/less metadata. Some containers, such as .mov, tend to favor large, high-quality files, while others, such as .flv, tend to favor the opposite.
4) To decode one of these files, you have to have the "key" (codec) to "unlock" how that file needs to be translated from a bunch of numbers (and a very small number of images) into something that makes sense as a video. That key is used to mathematically reverse-engineer the foregoing. This is extremely CPU-intensive, since it involves a huge amount of calculation to mathematically derive each pixel, each line, and each frame from the one before it. If something is missing, or a calculation goes wrong, the data will become corrupt, and the video will look mighty odd or simply not play. If it were just a stream of bitmap images (no compression, no aliasing, no changes list, etc.), it would simply skip a frame...but .bmp files are MASSIVE, and videos made with them would pose a challenge even for modern computers and fiber Internet connections.

5) In order to manipulate a single frame, all that math must first be done for everything leading up to that frame, and every frame after it must be recalculated accordingly. In other words, it's a lot like the "butterfly effect": if you change one little thing, everything after it changes as well. Also, you must first justify why and how butterflies came into existence, and write an in-depth mathematical proof on the topic.

So, in a nutshell, it takes about as long to edit even a single pixel of a video as it took to create that video in the first place. (I wish I'd realized that at the start of this project...but I was too busy freaking out about the screw-up.) On the upside, these videos are fairly resilient against data loss, and are small enough to fit onto a desktop PC. It's a win, except when you have to futz around with butterflies. :-D Thanks for all your suggestions, Thor 2.0, even if some of them might not work. It's good to have a sounding-board. |
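[Editor's note: the inter-frame idea in step 2 above can be sketched as a toy model — this is NOT how any real codec works (real codecs use motion compensation and transform coding), just an illustration of "key frames plus per-pixel deltas" with frames modeled as lists of ints.]

```python
def encode(frames, key_every=3):
    """Toy inter-frame coding: store a full frame ("key frame") every
    key_every frames, and only per-pixel differences in between."""
    out, prev = [], None
    for i, f in enumerate(frames):
        if i % key_every == 0:
            out.append(("key", list(f)))        # verbatim frame
        else:
            out.append(("delta", [b - a for a, b in zip(prev, f)]))
        prev = f
    return out

def decode(stream):
    """Rebuild every frame by replaying the deltas from each key frame --
    which is why editing one frame forces recomputing those after it."""
    frames, prev = [], None
    for kind, data in stream:
        if kind == "key":
            prev = list(data)
        else:
            prev = [a + d for a, d in zip(prev, data)]
        frames.append(list(prev))
    return frames

frames = [[10, 10, 10], [10, 11, 10], [12, 11, 10], [12, 12, 12]]
assert decode(encode(frames)) == frames  # lossless round trip in this toy
```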