Friday, August 29, 2008

Saya-VE 1st dev meeting (29/Aug/2008) Summary

Saya-VE 1st developer meeting (29/Aug/2008) Summary of activities.

CHANGES IN THE TEAM:

* Developers who were silently kicked out (surprise! This wasn't
mentioned in the meeting) due to lack of activity and reporting:
Nopalin, Wireshark. Note to Nopalin: You earned points by developing
part of the UI, so you have a good chance of being accepted again.
Please message me.

* Developers who left: C.J. Barker.

* Developers who joined: Rigoberto C., Javier Galicia

OFFICIAL STATEMENTS:

* The official communication channels for the devs are this group (
http://groups.google.com/group/saya-dev/ )
and Gmail chat (aka GTalk, Jabber). For this, all members are required
to get a Gmail account and enable chat in their Gmail page.

* The project HQ is located at developer.berlios.de, under the project
name "saya". We'll track bugs, features, and tasks there, and the SVN
repository is hosted there as well.

* Sourceforge will be used ONLY for the website ( http://sayavideoeditor.sourceforge.net/
) and for releasing binaries / docs.

(oops, forgot to say in the meeting: Devs must get accounts both at
Berlios and Sourceforge)

* Meetings like this one will take place every last Friday of the
month, at 8PM CDT (that's UTC-0500 while daylight saving is in
effect). The calendar can be seen at
http://www.google.com/calendar/embed?src=sbij7s3h23o0bhrrt4kmeppisk%4...

* Members will be given write-access to the calendar as they prove
their worth.

* New members must prove their worth by submitting one or two patches
before being given SVN access.

* If a dev won't be available for some time (e.g. vacations), he MUST
post it on this group (there's a specific thread for it).

* A "status with progress bars" page will be added to the website.

* Links with research info will be given to me so I can post them on
the website, under the "research" page.

TASKS GIVEN:

* Jeff is the Vegas expert. Ask him anything about UI design. He'll
also design the progress report with colored bars, which I'll post on
the website as soon as it's given to me.

* Bertrand will work on the CODEC module (GStreamer)

* I (rick) will work on the CORE and RENDERER modules. The effects /
rendering part is still pending. If anyone wants to give me a hand
with the thread classes implementation, he's welcome.

* Javier Galicia will work on the Timeline. If anyone knows wxWidgets,
give him a hand.

* Rigoberto C. will work on the playback controls for the preview
window.

NEXT MEETING WILL TAKE PLACE ON:

Date: September 26, same hour (8PM CDT, that is UTC-0500).

Server: irc.freenode.net (make sure you register your nickname with
/msg NickServ register).

Channel: #saya-dev

See the calendar for details.

Note: Remember, this meeting will be devs-ONLY. Foreigners will be kicked out and thrown to the dogs :P

First Devs Meeting a huge success!

I finally managed to get all the team members online. Unfortunately, some devs did not attend and didn't even report in. They'll be removed from the project ipso facto (sorry, I gave enough warnings and this isn't a treehouse club).

More bad news: CJ Barker had to leave the project; his schedule became very tight all of a sudden, and this will be a long-term thing. We'll miss you.

About the meeting:

We talked about ourselves (brief intros), project expectations, official communication channels, and how to organize ourselves. I also assigned tasks. Expect a "progress bar" to appear on the webpage soon. The meeting log will appear on our private Google group.

I'll keep you updated.

Saturday, August 16, 2008

Vacations, and a member is back! Kinda

Hello everyone! Posting from the beautiful lake scenery of Guadalajara. The air's so clean out here! :P

I got an unexpected message from one of the more "quiet" members of the team. Due to circumstances beyond his control, he couldn't log in at all for almost 2 months!

(I'm telling you, this project is cursed! Hard drives crashing, people getting fired and/or having car accidents, is this some kind of conspiracy?)

But everything's going smoothly. That other dev is back, and I just found a new programmer from Mexico (thanks OHLOH.NET) who's very eager to join the team!

So, we might be behind schedule, but this project is not turning back!

I'll keep in touch.

Tuesday, August 12, 2008

Taking a one-week vacation this Friday.

This Friday I'll go visit an old internet friend. I'll also install Linux on his PC ;-)
So I'll be out for a week, and be back on August 25.

*sigh* 4 months in, and the project has been progressing very slowly :(

Anyway - If you don't hear from me in the next month, it means a bus hit me or a plane crashed on me or something. Please pray for my safe return and everything :)

When I return, we'll arrange a devs-only meeting on irc. Stay tuned.

Playback framework, high resolution timers.

I've made progress on the playback framework - the core of our editor. I've designed the AVController class, and I'm now on the playback part - the part where you move data from the input to the output and keep the video and audio in sync.

For that I had to write a high-resolution (millisecond-precision) timing function. Modeling it on SDL's API, I built the syGetTicks() function - it gives you the number of milliseconds that have passed since the program was started. Unfortunately, the stupid Windows API didn't have a Unix-time-compatible function, so I had to break my head trying to make it work.

The Windows GetSystemTimeAsFileTime function returns a 64-bit integer (well, 2 32-bit integers actually) that gives you the number of 100-nanosecond units (Whiskey Tango Foxtrot?!) since January 1, 1601. Wha? How am I supposed to convert that?

Well, easy. You just divide it by 10,000,000 to get seconds. And how do you do that with 32-bit math?

Easy. Let's use some algebra.

(A*2^32 + B) / C = (B/C) + (A*2^32)/C

The low part of the division is taken care of. And the second term is just as simple:

(A * 2^32) / 10^7 = A * (2^32/10^7) = A * 429.4967296

What do you think? We only have to multiply the high part by a floating-point number and we'll get our result. However... we don't want to use floating-point math in a high-resolution timing routine. So instead, we'll do this:

A * 429.4967296 = (A*429) + (A*0.496) + (A*0.0007296)

Luckily, these decimals have exact fractional equivalents.

A * 429.4967296 = (A*429) + ((A*62)/125) + ((A*57)/78125).

Ta-da! So the final result is:

result = (low / 10000000) + ((hi*57)/78125) + ((hi*62)/125) + (hi* 429);

And we have the Win32 32-bit equivalent for obtaining the number of seconds since... something. (Strictly speaking, each integer division truncates, so the sum can end up a couple of seconds below the exact quotient, but for our purposes that's close enough.)
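Here's a quick, throwaway sanity check (standalone test code, not part of Saya) comparing the 32-bit decomposition against an exact 64-bit division:

#include <stdio.h>

int main() {
    // {hi, low} sample pairs; hi is already masked to 16 bits,
    // just like in the real function below.
    unsigned long samples[3][2] = {
        {0x0000UL, 0x00000000UL},
        {0x1234UL, 0x89ABCDEFUL},
        {0xFFFFUL, 0xFFFFFFFFUL}
    };
    for (int i = 0; i < 3; ++i) {
        unsigned long hi = samples[i][0], low = samples[i][1];
        unsigned long long exact =
            (((unsigned long long)hi << 32) | low) / 10000000ULL;
        unsigned long approx = (low / 10000000) + ((hi*57)/78125) +
                               ((hi*62)/125) + (hi*429);
        printf("hi=%04lx low=%08lx exact=%llu approx=%lu diff=%lld\n",
               hi, low, exact, approx,
               (long long)exact - (long long)approx);
    }
    return 0;
}

The diff column stays within a couple of units - that's the truncation error mentioned above.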

If at startup we store an initial counter, for subsequent calls we only need to subtract that number and we'll obtain the number of seconds that have elapsed since the program started.

Obtaining the milliseconds for the ticks was easier. Windows has a GetTickCount() function, which returns the number of milliseconds since the system started; we can't use it directly as our counter (it wraps around every 49.7 days), but we can take it modulo 1000 and keep just the milliseconds-within-the-current-second part.

Here are the final functions. The sy prefix is for "saya". Note that the Windows part hasn't been tested yet :P
If you want the final version, please check the Saya-VE source code (SVN) at
http://developer.berlios.de/projects/saya/


/**************************************************************
 * Cross-platform High resolution timer functions.
 * Copyright: Ricardo Garcia
 * Website: http://sayavideoeditor.sourceforge.net/
 * License: WxWindows License
 **************************************************************/

#ifdef __WIN32__
    #include <windows.h>
#else
    #include <sys/time.h> /* gettimeofday() */
    #include <stddef.h>   /* NULL */
#endif

unsigned long syGetTime();

/* Seconds counter taken at program startup; used to re-base the timer. */
unsigned long sySecondsAtInit = syGetTime();

/* Returns the number of seconds since a platform-dependent epoch. */
unsigned long syGetTime() {
    unsigned long result;
#ifdef __WIN32__
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    unsigned long low = ft.dwLowDateTime;
    /* We spare the highest 16 bits;
       we don't want to overflow the calculation. */
    unsigned long hi = ft.dwHighDateTime & 0x0ffff;
    /* 32-bit equivalent of dividing the 64-bit FILETIME by 10^7
       (100-nanosecond units -> seconds); see the derivation above. */
    result = (low / 10000000) +
             ((hi*57)/78125) +
             ((hi*62)/125) +
             (hi*429);
#else
    struct timeval mytime;
    gettimeofday(&mytime, NULL);
    result = (unsigned long)(mytime.tv_sec);
#endif
    return result;
}

/* Returns the number of milliseconds elapsed since program startup. */
unsigned long syGetTicks() {
    unsigned long result;
#ifdef __WIN32__
    /* Seconds since startup converted to milliseconds, plus the
       milliseconds-within-the-second part from GetTickCount(). */
    result = (syGetTime() - sySecondsAtInit)*1000 +
             (GetTickCount() % 1000);
#else
    struct timeval mytime;
    gettimeofday(&mytime, NULL);
    result = (unsigned long)(mytime.tv_sec - sySecondsAtInit)*1000;
    result += (((unsigned long)(mytime.tv_usec)) / 1000);
#endif
    return result;
}

Thursday, August 7, 2008

How to implement the renderers? Draft 1.

Actually this is more like a brainstorm, but bear with me :)

So far, we have been able to make a workable implementation of VideoOutputDevice. It has the following members:


class VideoOutputDevice : public syAborter {
  public:
    VideoOutputDevice();          // Constructor
    bool Init();                  // Initializes the output device
    bool IsOk();                  // Is the device OK?
    bool IsPlaying();             // Is data currently being sent to the device?
    void ShutDown();              // Can only be called from the main thread!
    VideoColorFormat GetColorFormat();
    unsigned int GetWidth();
    unsigned int GetHeight();
    bool ChangeSize(unsigned int newwidth, unsigned int newheight);
                                  // Can only be called from the main thread!
    void LoadVideoData(syBitmap* bitmap);
    virtual bool MustAbort();
    virtual ~VideoOutputDevice(); // Destructor
  protected:
    // ...
  private:
    // ...
};

The renderer must invoke VideoOutputDevice::Init on playback start and
VideoOutputDevice::ShutDown on playback end; the same goes for
AudioOutputDevice::Init and AudioOutputDevice::ShutDown. Additionally,
it must call VideoOutputDevice::LoadVideoData regularly (in the case of
playback) or for every frame (in the case of encoding). Therefore, it
needs a way to know the input's framerate. It also needs to know the
input's audio frequency.

It needs to be multithreaded so that the framerate doesn't depend on
whether the main thread's GUI is blocked.
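
For illustration, here's a rough sketch of what such a playback loop might look like on its own thread (hypothetical code - std::thread and std::chrono are used just for brevity, and DecodeNextFrame() is a placeholder, not an actual Saya function):

#include <atomic>
#include <chrono>
#include <thread>

syBitmap* DecodeNextFrame(); // placeholder standing in for the codec side

void PlaybackLoop(VideoOutputDevice* videoout, float framerate,
                  std::atomic<bool>& stop) {
    using sysclock = std::chrono::steady_clock;
    const auto frameperiod =
        std::chrono::microseconds((long long)(1000000.0 / framerate));
    auto nextframe = sysclock::now() + frameperiod;
    while (!stop && videoout->IsOk()) {
        // Decode the next frame and hand it to the output device.
        videoout->LoadVideoData(DecodeNextFrame());
        // Sleep until the next frame is due, so the framerate doesn't
        // depend on whether the GUI thread is busy.
        std::this_thread::sleep_until(nextframe);
        nextframe += frameperiod;
    }
}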

Let's assume that it's VidProject which tells the renderer what the framerate is.

So we have:

void Renderer::Init(VideoInputDevice* videoin, AudioInputDevice* audioin,
                    VideoOutputDevice* videoout, AudioOutputDevice* audioout);


This means we're gonna need new classes for input: VideoInputDevice and AudioInputDevice.

bool Renderer::SetVideoFramerate(float framerate);


And now, onto the playback functions:

void Renderer::Play(float speed = 1.0,bool muted = false);
void Renderer::Pause();
void Renderer::Stop();
void Renderer::Seek(unsigned long time); // Time in milliseconds to seek to


All that's fine, but what happens when we want to display a still frame? We don't know what kind of video output device we
have - a player or an encoder - so there must be some way to send a still frame to the video device.

void Renderer::PlayFrame();
// (Note that this should either be a protected function or only be enabled
// when the video is paused; otherwise we could desync video and audio)


Now that I think of it, sending still frames is exactly what video playback does: every N milliseconds, we send a frame to the
output buffer. So there must be separate seeks for video and audio.

void Renderer::SeekVideo(unsigned long time);
void Renderer::SeekAudio(unsigned long time);


And if we're seeking, there must be a way to tell if we're past the clip's duration.

bool Renderer::IsVideoEof();
bool Renderer::IsAudioEof();


And it seems we'll need separate video and audio functions for everything (edit: NOT!)

void Renderer::PlayVideo(float speed = 1.0);
void Renderer::PlayAudio(float speed = 1.0);
void Renderer::PauseVideo();
void Renderer::PauseAudio();
void Renderer::StopVideo();
void Renderer::StopAudio();

But I wonder if having separate stop functions would be good at all, because of sync issues. I mean, if we don't
want the audio or video to be shown, we just don't decode it. It's a matter of seeking, decoding, and sending.
So PlayVideo and PlayAudio will just enable or disable video and/or audio, and we'll only need the shared Pause
and Stop (see the consolidated sketch after the list below).


void Renderer::PauseVideo(); SCRAPPED
void Renderer::PauseAudio(); SCRAPPED
void Renderer::StopVideo(); SCRAPPED
void Renderer::StopAudio(); SCRAPPED
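
So, putting the surviving calls together, the draft interface might look something like this (just a sketch of where the brainstorm stands, not final code):

// Draft sketch of the Renderer interface as it stands after this
// brainstorm; all signatures are tentative and may change.
class Renderer {
  public:
    void Init(VideoInputDevice* videoin, AudioInputDevice* audioin,
              VideoOutputDevice* videoout, AudioOutputDevice* audioout);
    bool SetVideoFramerate(float framerate);

    void Play(float speed = 1.0, bool muted = false);
    void PlayVideo(float speed = 1.0); // enables/disables video only
    void PlayAudio(float speed = 1.0); // enables/disables audio only
    void Pause();                      // shared by video and audio
    void Stop();                       // shared by video and audio

    void Seek(unsigned long time);     // time in milliseconds
    void SeekVideo(unsigned long time);
    void SeekAudio(unsigned long time);
    bool IsVideoEof();
    bool IsAudioEof();

  protected:
    void PlayFrame(); // protected: only safe while paused
};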

I think that with this info we'll be able to design a good rendering / playback framework.
Stay tuned.

Wednesday, August 6, 2008

syBitmap finished! Now what?

The dev in charge of the video playback controls is going to be away this week. So maybe it's time to start designing the Renderer API.

Which functions will it have? How will it tell the codecs to read a file? How to handle video and audio sync? How to handle the threads?

Too many questions, any help appreciated. Thanks.

Saya-VE without SDL, experiment 1

Finally, my efforts are beginning to show results. I realized that I had committed several mistakes (read: bugs) while implementing wxVideoPanel. While fixing them, I also improved the code a little.

Additional bitmap
The most important bug was trying to save time by not creating another buffer. This caused a crash when resizing the panel under certain conditions. By using a separate buffer for wxVideoPanel, and updating it from wxVideoOutputDevice::RenderData(), I could finally be sure that wxVideoOutputDevice's bitmap was not accessed at the wrong time. As a bonus, this means that while the video is paused, I still keep a copy of the buffer (oops... now that I think of it, the bitmap info is actually lost when resizing. I'll fix that soon).

syBitmapCopier
I finished implementing the syBitmapCopier class. Most functions are inline, so no stack space is wasted on calls when invoking them (well, some variables were required, but those are unavoidable).

syAborter
I also did some cleanup. I moved all the thread functions to syBitmap. I took the VideoOutputDevice* pointer in syVODBitmap and replaced it with a syAborter* pointer. syAborter is an abstract class with only one method: bool MustAbort(), which indicates whether an expensive operation must be aborted immediately. Then I made VideoOutputDevice and AudioOutputDevice subclasses of syAborter.

What this means: syBitmap has all the required functions to be thread safe, and integration with ANY VideoOutputDevice class will be a piece of cake.

Classes cleanup
Now that all the syVODBitmap functions were moved to syBitmap, syVODBitmap was no longer necessary, so I deleted it.

And now, ladies and gentlemen... the demo!

The last bug I had made was calling an expensive wxWidgets function inside a for(x)... for(y) loop. No wonder the display was so slow. But now the wxVideoPanel demo is fully functional. And here it is!


The Demo() function (actually, method) of wxVideoPanel regularly creates a nicely colored image of arbitrary dimensions (the ripples change approximately every 5 ms), which is then scaled to fit the panel's dimensions. This way, no matter whether your video is 4:3 or 16:9, it won't be distorted.

After being created, the image is sent to wxVideoOutputDevice via the LoadVideoData() method. This method copies the data into its own bitmap and then, in its RenderData() method, calls wxVideoPanel::LoadData().

wxVideoPanel::LoadData() locks its own bitmap and pastes the data into it. wxVideoPanel's bitmap is locked because two other functions access it (each one locks the bitmap as well): OnResize and OnPaint.

wxVideoPanel::OnIdle() checks if new data has been loaded, and calls OnPaint() if necessary. OnPaint() uses a wxBufferedDC to repaint the screen.

This way, we have our nicely colored image which changes in realtime without flickering at all. Ta-da!
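
For anyone curious, here's roughly what that locking dance looks like (a simplified sketch of the scheme, not the actual Saya source - member names are illustrative and the event-table plumbing is omitted; see the SVN for the real thing):

#include <wx/wx.h>
#include <wx/dcbuffer.h> // wxBufferedPaintDC

// Simplified sketch of the wxVideoPanel locking scheme described above.
class wxVideoPanel : public wxPanel {
  public:
    // Called by wxVideoOutputDevice::RenderData() from the worker thread.
    void LoadData(syBitmap* source) {
        wxMutexLocker lock(m_Mutex);  // keeps OnPaint / OnResize out
        m_Bitmap->PasteFrom(source);  // scale + center into our own buffer
        m_BitmapChanged = true;
    }
    void OnIdle(wxIdleEvent& event) {
        bool changed;
        {
            wxMutexLocker lock(m_Mutex);
            changed = m_BitmapChanged;
            m_BitmapChanged = false;
        }
        if (changed) {
            Refresh(false);           // triggers OnPaint, no flicker
        }
    }
    void OnPaint(wxPaintEvent& event) {
        wxMutexLocker lock(m_Mutex);
        wxBufferedPaintDC dc(this);   // double-buffered repaint
        // ... blit m_Bitmap's pixels onto dc ...
    }
  private:
    wxMutex m_Mutex;
    syBitmap* m_Bitmap;
    bool m_BitmapChanged;
};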

Monday, August 4, 2008

From SDL to MyOwnVideoImplementation (TM)

After trying out the SDL video demo, I realized that for a video editor I won't need sprites, 3D textures or anything like that. It would be easier to write my own bitmap buffer in memory. So I did, and I ended up creating the wxVideoPanel and wxVideoOutputDevice classes.

Unfortunately, the screen-refreshing routines were awful. No, worse. They were hideous. I had to calculate everything manually, handle the pixel color spaces, etc. There had to be a better way. And hence, I came up with syBitmap: a cross-platform implementation of an in-memory bitmap. It has a virtual MustAbort() function which you can adapt for multi-threading purposes.
Currently I've been able to replicate the SDL example, but I was too busy and tired to upload the screenshot.

As an added bonus, I created the derived class syVODBitmap (VOD stands for Video Output Device), which also has Lock() and Unlock() functions (also for multi-threading).

The best part is that I could add a PasteFrom function to syBitmap, so that the copy also scales and centers the source bitmap to fit the destination. Unfortunately, the implementation isn't as fast as I wanted because it uses floating-point math. But I plan to replace it with fixed-point math so the copying won't become a bottleneck.
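
For the curious, the fixed-point idea goes something like this (a hypothetical sketch of the technique, not the current PasteFrom code - ScaleRow and the 32-bit pixel type are made up for illustration):

// 16.16 fixed-point horizontal scaling sketch: instead of multiplying
// every destination x by a floating-point scale factor, precompute an
// integer step and accumulate it per pixel.
void ScaleRow(const unsigned long* src, unsigned long* dst,
              unsigned int srcwidth, unsigned int dstwidth) {
    // srcwidth / dstwidth as a 16.16 fixed-point ratio
    unsigned long srcstep = ((unsigned long)srcwidth << 16) / dstwidth;
    unsigned long srcpos = 0; // 16.16 fixed-point position in the source row
    for (unsigned int x = 0; x < dstwidth; ++x) {
        dst[x] = src[srcpos >> 16]; // integer part selects the source pixel
        srcpos += srcstep;          // pure integer math, no floats involved
    }
}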

Still, the implementation is both ugly and slow. So I ended up creating another class (which I have yet to commit to SVN): syBitmapCopier. The idea behind it is this: instead of calculating a pointer by multiplying y*width and adding x for every pixel, and then looking up the pixels' color format each time, we just initialize the class with the source and destination bitmaps, and those members are calculated only once.

I have designed functions to copy pixels and increment only the source pointer, only the destination pointer, or both. I've also designed functions to copy entire rows and advance either or both of the pointers by one full row. This way we won't have to recalculate parameters for each pixel or pass them through the stack. Who knows, maybe I can inline all of these functions to get a super-efficient bitmap copier.
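
The idea would look something like this (a sketch of the concept only - GetPixelAddress() and the 32-bits-per-pixel assumption are mine, not the actual class):

// Sketch of the syBitmapCopier idea: compute source / destination state
// once on Init(), then expose tiny inline copy steps.
class syBitmapCopier {
  public:
    void Init(syBitmap* source, syBitmap* dest) {
        // Calculated once instead of once per pixel:
        m_Src = source->GetPixelAddress(0, 0); // assumed accessor
        m_Dst = dest->GetPixelAddress(0, 0);
        m_SrcRowLength = source->GetWidth();
        m_DstRowLength = dest->GetWidth();
    }
    // Copy one pixel, advancing only the source, only the dest, or both.
    void CopyPixelAndIncrementSrc()  { *m_Dst = *m_Src++; }
    void CopyPixelAndIncrementDst()  { *m_Dst++ = *m_Src; }
    void CopyPixelAndIncrementBoth() { *m_Dst++ = *m_Src++; }
    // Copy a full row, then advance both pointers by one full row.
    void CopyRowAndIncrementBoth() {
        for (unsigned int x = 0; x < m_SrcRowLength; ++x) {
            m_Dst[x] = m_Src[x];
        }
        m_Src += m_SrcRowLength;
        m_Dst += m_DstRowLength;
    }
  private:
    unsigned long* m_Src; // assuming 32 bits per pixel
    unsigned long* m_Dst;
    unsigned int m_SrcRowLength;
    unsigned int m_DstRowLength;
};

Since everything is defined in the class body, the compiler can inline all of these calls - exactly the "no parameters through the stack" goal described above.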

As soon as I finish the syBitmapCopier implementation, I'll make a multi-threaded demo to see how many frames per second I can get. And then I'll start on the video playback UI, which is already overdue.