
Distributed development


The starting premise of the Ayyi project is that GPL’d audio software cannot succeed unless the modularity of its architecture is increased. There are good technical and usability reasons for this, but the primary reason is to support a distributed development model. With developers working in small teams and having diverse goals and opinions, the monolithic applications copied from the proprietary world have so far failed to achieve critical mass. This is not meant to belittle projects such as Ardour, Rosegarden or MusE in any way, but is simply a recognition of the large challenges they face, and of the continually rising expectations fuelled by proprietary software products.

Many developers are not greatly interested in dealing with the headaches of slotting into a larger framework, yet spending time writing code that is useful only to yourself is in many ways not in your own best interest. By making it useful to others, and thereby becoming part of something bigger, you make the code more useful to yourself in the long term. It becomes possible to concentrate on advanced functionality and pushing the boundaries, rather than covering old ground writing boring support code.

One of the problems of modularity in this genre is that, from the user’s point of view, there needs to be a central focal point around which everything revolves. In existing software, this takes the form of an Arrange/Project window. Ultimately, it would be nice to make screen space here available to devolved services, similar to Microsoft OLE, but this is technically quite difficult to do; perhaps it is something to consider later.

So the proposition is for this space to aggregate the control of separate services. Some of these are nicely separated already - primarily synths, processing plugins, and routing. JACK and LADSPA/DSSI are excellent examples to build on.
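To make the separation concrete, here is a minimal JACK client in C that simply copies its input to its output. This is a sketch rather than Ayyi code: the client and port names are arbitrary examples, but any process that speaks the JACK API in this way can be patched into the same routing graph as every other audio application on the system.

    /* A minimal JACK pass-through client, illustrating how a routing
     * service stays cleanly separated from the applications using it.
     * Client and port names are arbitrary examples. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <jack/jack.h>

    static jack_port_t* in_port;
    static jack_port_t* out_port;

    /* Called by the JACK server from its realtime thread. */
    static int
    process (jack_nframes_t nframes, void* arg)
    {
        jack_default_audio_sample_t* in  = jack_port_get_buffer(in_port, nframes);
        jack_default_audio_sample_t* out = jack_port_get_buffer(out_port, nframes);
        memcpy(out, in, sizeof(jack_default_audio_sample_t) * nframes);
        return 0;
    }

    int
    main (void)
    {
        jack_client_t* client = jack_client_open("example-service", JackNullOption, NULL);
        if (!client) { fprintf(stderr, "jack server not running?\n"); return 1; }

        jack_set_process_callback(client, process, NULL);
        in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput,  0);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);

        if (jack_activate(client)) { fprintf(stderr, "cannot activate client\n"); return 1; }

        sleep(30);  /* a real service would run an event loop here */
        jack_client_close(client);
        return 0;
    }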

MVC

There is a natural split available along Model, View, Controller lines. Many audio programs are already internally architected like this.

It seems logical to preserve a single model, or ‘song’, containing the central project information, its audio and musical playlists, and so on. This needs to be ‘rendered’, both visually and in audio, by record/playback engines for each media type and by one or more GUIs. Input can come from GUIs and other specialised controllers. The split seems pretty clear cut, and experiments have shown that it is actually workable - so are there any disadvantages?
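As a sketch of what that split looks like in code (the names here are hypothetical, not the actual Ayyi API), the model exposes only data and change notification; engines and GUIs register as observers and each do their own rendering:

    /* Hypothetical Model/View sketch: the song model knows nothing about
     * audio engines or GUIs - it only stores data and notifies observers. */

    typedef struct { double start; double length; } Region;

    typedef void (*SongObserver) (Region* changed, void* user_data);

    typedef struct {
        Region*      regions;
        int          n_regions;
        SongObserver observers[8];      /* no bounds checking, for brevity */
        void*        observer_data[8];
        int          n_observers;
    } Song;

    void
    song_add_observer (Song* song, SongObserver fn, void* user_data)
    {
        song->observers[song->n_observers] = fn;
        song->observer_data[song->n_observers++] = user_data;
    }

    /* Any controller (GUI, hardware surface, script) edits via the model;
     * the audio engine and each GUI then react in their own way. */
    void
    song_move_region (Song* song, Region* region, double new_start)
    {
        region->start = new_start;
        for (int i = 0; i < song->n_observers; i++)
            song->observers[i](region, song->observer_data[i]);
    }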

The idea is to add to the collection of run-time services that are available to ease development.

Ayyi uses a variety of independent processes performing specialised functions: a central song model, record/playback engines for each media type, and one or more GUIs.

Most of these correspond to tasks that are internally separated at the class or thread level in traditional applications anyway, so enforcing their separation isn’t necessarily such a big step.

One client may provide more than one of these services.
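How might one process offer several services? If services announce themselves on a session bus such as D-Bus - one plausible mechanism, not necessarily the one Ayyi uses - a client can simply claim more than one well-known name. The org.ayyi.* names below are purely illustrative; this sketch uses GLib’s GDBus:

    /* Hypothetical sketch: one process claiming two service names on the
     * session bus. The service names are illustrative only. */
    #include <gio/gio.h>

    int
    main (void)
    {
        GMainLoop* loop = g_main_loop_new(NULL, FALSE);

        /* The same client acts as both a MIDI engine and a mixer. */
        g_bus_own_name(G_BUS_TYPE_SESSION, "org.ayyi.MidiEngine",
                       G_BUS_NAME_OWNER_FLAGS_NONE,
                       NULL, NULL, NULL, NULL, NULL);
        g_bus_own_name(G_BUS_TYPE_SESSION, "org.ayyi.Mixer",
                       G_BUS_NAME_OWNER_FLAGS_NONE,
                       NULL, NULL, NULL, NULL, NULL);

        g_main_loop_run(loop);  /* serve requests until killed */
        return 0;
    }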