According to good programming practice, you should first work on a good design, and only then start the implementation.
Well, for now, the project will just focus on being an editor front-end. This means that the video processing stuff will be held back for later.
Since my short-term goal is to have a workable user interface replicating the menus and dialogs of commercial video applications, I can assume that those applications were well designed, and that all I need to do for now is implement the dialogs and menus. When I hit a wall (i.e. a dialog requiring information that I don't have yet), I'll develop the infrastructure as needed, with the condition that I'll make it extensible enough to avoid getting locked into a specific data structure.
So, if you were worried because I don't have (yet) a working UML design, rest assured I'm also working on a good design for the infrastructure.
So far, we have a class I'll call ProjectManager, which will handle project saving / loading, exporting, asking the user questions, etc.
ProjectManager will have a member m_project of class VidProject. VidProject is a container, and will hold the following:
* Project Properties (title, framerate, preferred export format, codecs used, etc.)
* A double-ended queue (std::deque) of undo/redo states
* An std::vector of Sequences (timelines), each of which will have a vector of tracks.
* A vector of Clips, and a vector of Clip indexes (to reuse deleted clip slots)
* A vector of Resources, which are the actual video clips (to be more precise, the info on how to retrieve such clips, i.e. filename, starting / ending frame, etc.)
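The container above can be sketched roughly like this. This is only a sketch of what I have in mind; aside from VidProject and the standard containers, every type and field name here is a placeholder I made up, not settled API:

```cpp
// Rough sketch of the VidProject container. All names below except
// VidProject are placeholders, not final API.
#include <cassert>
#include <deque>
#include <string>
#include <vector>

struct ProjectProperties {
    std::string title;
    double framerate = 25.0;     // frames per second
    std::string exportFormat;    // preferred export format, codecs, etc.
};

struct Track {
    std::vector<int> childIds;   // detailed later in the post
};

struct Sequence {                // a timeline
    std::vector<Track> tracks;
};

struct Clip {
    // fields described in the clip list below
};

struct Resource {                // how to retrieve the actual video clip
    std::string filename;
    long startFrame = 0;         // starting frame within the file
    long endFrame = 0;           // ending frame within the file
};

struct VidProject {
    ProjectProperties properties;
    std::deque<std::string> undoStates;  // serialized undo / redo states
    std::vector<Sequence> sequences;
    std::vector<Clip> clips;
    std::vector<size_t> freeClipSlots;   // indexes of deleted clips, for reuse
    std::vector<Resource> resources;
};
```

Note that the undo/redo deque holds serialized strings rather than object graphs, which fits the no-pointers rule I'll get to at the end of this post.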
Each clip will have (at least) the following information:
* id# for the origin (the resource used).
* Starting origin frame (for video; for audio we'll have samples instead. Note that the resource could also have starting / ending frames within the actual file used)
* Ending origin frame
* Loop count (negative for infinite loops; 0 for no loops)
* An enum: should video before the first frame be black? Transparent? A copy of the first frame?
* An enum: the same for video after the last frame
* Changeable duration in timeline frames (for speeding up / slowing down scenes)
* A vector of effects (the effect id will be an id# in the case of built-in effects, or a string in the case of external plugins). To keep things simple, the effect parameters will be stored in an std::map.
* If it's an audio clip, the id# of the corresponding video clip, in case they're synchronized.
* the id# for the starting transition (use 0 for none)
* the id# for the ending transition (use 0 for none)
* In the case of audio tracks, the channel # (0 for the first channel, 1 for the second, etc. - this will be defined later as the implementation gets done). In the case of stereo and multiple tracks, this will be a vector where the destination tracks point to the source tracks. For remixing tracks, mixing down to mono, etc., there will also be stackable audio effects.
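Putting the clip list above into code, a rough sketch could look like this (again, every field and enum name here is a placeholder of my own, not final):

```cpp
// Sketch of the Clip record from the list above. Names are placeholders.
#include <cassert>
#include <map>
#include <string>
#include <vector>

// What to show before the first / after the last frame of a clip.
enum class PaddingMode { Black, Transparent, CopyNearestFrame };

struct Effect {
    int builtinId = 0;              // id# for built-in effects
    std::string pluginName;         // or a string for external plugins
    std::map<std::string, std::string> parameters;  // kept simple on purpose
};

struct Clip {
    int resourceId = 0;             // id# of the origin Resource
    long originStart = 0;           // starting origin frame (samples for audio)
    long originEnd = 0;             // ending origin frame
    int loopCount = 0;              // negative = infinite; 0 = no loops
    PaddingMode beforeFirstFrame = PaddingMode::Black;
    PaddingMode afterLastFrame = PaddingMode::Black;
    long timelineDuration = 0;      // in timeline frames (speed up / slow down)
    std::vector<Effect> effects;
    int linkedVideoClipId = 0;      // audio only: synchronized video clip id#
    int startTransitionId = 0;      // 0 = none
    int endTransitionId = 0;        // 0 = none
    std::vector<int> audioChannels; // audio only: destination -> source channels
};
```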
Tracks will be stored in a tree structure (children will be stored in a vector of track id#'s) where the root of the tree will be the final rendering. Again, we won't use pointers but indexes. To prevent recursion, each track will also have a level indicator.
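A minimal sketch of the index-based track tree, with a hypothetical attachChild helper (my invention, for illustration) that derives the child's level from its parent's:

```cpp
// Index-based track tree: children are stored as track id#'s, and each
// track carries a level indicator as a guard against recursion.
// attachChild is a made-up helper for illustration only.
#include <cassert>
#include <vector>

struct Track {
    std::vector<int> childIds;  // indexes into the project's track array
    int level = 0;              // depth; the root (final render) is level 0
};

// Attach childId under parentId, setting the child's level one below its
// parent. Rejects the trivial self-cycle.
bool attachChild(std::vector<Track>& tracks, int parentId, int childId) {
    if (parentId == childId) return false;
    tracks[childId].level = tracks[parentId].level + 1;
    tracks[parentId].childIds.push_back(childId);
    return true;
}
```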
Each track will contain a map from frame# => clip id#. We can use these maps to construct a per-sequence set of transition frames (conceptually, a set is like a map whose values are booleans). With these transition frames we can construct in real time a list of states telling us which clips are active at any given frame. With this info we can render the clips in the timeline.
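Here's a sketch of how that per-sequence set could be built, merging the keys of every track's frame map (the function name and types are placeholders of mine):

```cpp
// Derive the transition frames of a sequence by merging the keys of each
// track's frame# -> clip id# map. Between two consecutive transition
// frames, the set of active clips is constant, so the renderer only needs
// one state per interval. Names here are placeholders.
#include <cassert>
#include <map>
#include <set>
#include <vector>

using FrameMap = std::map<long, int>;  // frame# -> clip id#, one per track

std::set<long> transitionFrames(const std::vector<FrameMap>& trackMaps) {
    std::set<long> frames;
    for (const FrameMap& m : trackMaps)
        for (const auto& entry : m)
            frames.insert(entry.first);  // any key is a potential clip change
    return frames;
}
```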
Note that I don't plan to use pointers AT ALL. By using standard containers and local indexes, I can serialize the sequences into easily-storable strings for undo / redo states, and it'll also be easier to serialize the whole project into an XML string.
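As a tiny illustration of why indexes serialize so easily, here's a round trip of an index vector through a plain string. The space-separated format is a made-up placeholder, not the project's actual one (which will be XML):

```cpp
// Indexes survive a dump-and-reload with no pointer fixup: serialize a
// vector of clip id#'s to a string and read it back. The format here is
// a throwaway placeholder for illustration.
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

std::string serializeIds(const std::vector<int>& clipIds) {
    std::ostringstream out;
    for (int id : clipIds)
        out << id << ' ';
    return out.str();
}

std::vector<int> deserializeIds(const std::string& text) {
    std::istringstream in(text);
    std::vector<int> ids;
    int id;
    while (in >> id)
        ids.push_back(id);
    return ids;
}
```

A pointer-based structure would need every pointer rewritten on reload; the indexes above are valid as-is.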