Improving JACK MIDI Out

Re: Improving JACK MIDI Out

pedroseq
Hi, Maxim.

As you suggested, I decided to subscribe to MuseScore's developer mailing list to ask a few questions and give some more suggestions regarding our previous conversation in the comments on your blog.

First, let me say that your latest achievements (possible Alsa MIDI support, MIDI channels combination for instrument changes, and addition of MIDI Actions for all instruments) are great for MuseScore's future. As you said yourself: "MIDI Actions have a great potential and implementing it can lead to big changes in MuseScore. Maybe it will even have an automation tracks for the midi parameters: volume, pan, chorus...".

I completely agree with your approach, and since we're talking about it, why not open the way for future implementation of automation tracks for any MIDI parameter (CCs, aftertouch, Pitch Bend, etc.)?

Another thing: will these MIDI Actions include Note On/Note Off events, so they can act as MIDI keyswitch triggers? If so, this may be a killer feature for using a keyswitched sample library directly! Maybe, with customized instruments.xml files for the library, everything could even be automated. What do you think? Another possible use for such "hidden" Note On/Note Off messaging would be allowing the playback of cluster chords with proper notation (not yet available in MuseScore) and of the more complex ornaments in the "Articulations & Ornaments" palette that aren't played back in the current version.

This new MIDI implementation might also bring back another thing that has been forgotten for some time now. A few years ago, there was a DAW project, OOMidi, whose developers had the idea, in their development roadmap, of integrating/interfacing their software (as the main DAW application) directly with MuseScore (as the main scoring application). But that project seems to have died (unfortunately, since otherwise it might have become the best Linux DAW), and the integration idea was lost. Maybe now, with all this MIDI development, it may be possible to rethink it and contact other DAW development teams to consider the same scenario, no? What do you think? That would solve many issues regarding future playback/mixing development, which could be handled "in the shadows" by the DAW application as a "playback/mixing host", with a direct MIDI "pipeline" straight from MuseScore. Thus, future focus could once again be kept on pure notation development, but now with advanced MIDI possibilities.

Now, for something more down to earth: in your blog posts you mentioned that you changed MuseScore's file format to adapt it to the new features. The other day, I tried to open a test file I made with a MuseScore 2.0 nightly build prior to your changes, and it crashed a more recent nightly build. But when I opened another file (a self-made orchestral template) not containing any notes/notation other than the staves with instrument names, all went fine. Do you think there is a connection?

I also have some other possible "bugs" that I'd like to address, regarding MuseScore plugins (the ones I downloaded from MuseScore's site don't load in version 2.0, because they're ".js" and not ".qml") and custom time signatures not appearing in the sidebar palette. Where should I raise these: should I post here, or on the Forum?

As my final comment for now, have you or any other developer looked carefully at this other MuseScore-related project, from Emile Ellberger: http://blog.zhdk.ch/ssmn/, which has custom-made palettes for MuseScore 2.0 for sound spatialization? To me, this seems like a modern composer's "heaven on earth"!! The only drawback is that it is only for Mac OS X. Could any developer consider asking them to share their research project with MuseScore's main release version, so that all MuseScore users could benefit from this wonderful tool, maybe in a future 3.x version or so? This could make MuseScore's usage possibilities skyrocket beyond what any other software (free or paid) offers, and it would really show the power of open-source development when people unite their ideas and share their knowledge!

Best regards

Re: Improving JACK MIDI Out

igevorse
Hi all.
Sorry for the late answer.
> The corresponding 32-bit nightly got no further in its host Lubuntu
> 14.04 than the splashscreen before dying.
Without logs I can't say anything. I guess you need to install some additional libraries related to Qt5.

> JACK had been added and was working with qjackctl  in linuxmint17 also,
> but settings in the mscore did not yield sound except with solely *internal
> synthesizer...PulseAudio*, and that only after adding soundfont
> FluidR3_GM.sf2 in the mscore nightly's Synthesizer pane.
You can try a new nightly build. JACK should work properly now. The internal synthesizer is linked with JACK MIDI Out (we have to know the program list) and JACK Audio (we have to render sound before passing it to JACK), so you need to add a soundfont in MuseScore anyway.

> JACK works nicely with many apps --  but not with musescore 1.3 or earlier
Yes, JACK support in MuseScore 1.3 was not fully implemented; actually, only JACK Audio works in 1.3. However, my work on JACK is not related to MuseScore 1.3, only to 2.0. There was a post on the development list: we will try to release a first beta before the end of August. You could try it if you don't want to use nightly builds.

> I completely agree with your approach, and since we're talking about it, why
> not open the way for future implementation of automation tracks for any MIDI
> parameter (CCs, aftertouch, Pitch Bend, etc.)?
I just made an example of an automation track for volume, but actually it could be done for any MIDI parameter. However, this feature is complex and may not be easy to implement. As you see, developers are now focused on fixing bugs and preparing for the release of MuseScore 2.0 beta 1, so this feature will not be implemented soon.
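
Just to give an idea of what I mean by an automation track (this is only a standalone sketch printing raw MIDI bytes, nothing from the MuseScore code base): a track can be seen as a list of (tick, value) points that the sequencer flattens into Control Change messages.

    // Minimal sketch (not MuseScore code): an automation "track" as a list of
    // (tick, value) points that get flattened into raw MIDI CC messages.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct AutomationPoint {
        int tick;       // position in the score, in MIDI ticks
        uint8_t value;  // controller value 0-127
    };

    int main() {
        const uint8_t channel = 0;
        const uint8_t controller = 7;  // CC7 = channel volume
        std::vector<AutomationPoint> volumeTrack = {
            {0, 64}, {480, 80}, {960, 100}, {1920, 72}};

        for (const auto& p : volumeTrack) {
            // A Control Change message is three bytes: status, controller, value.
            uint8_t status = 0xB0 | channel;
            std::printf("tick %4d -> CC: %02X %02X %02X\n",
                        p.tick, status, controller, p.value);
        }
    }

The hard part, as described above, is not emitting these messages but deciding how such points interact with the score elements that already control the same parameters.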


> Another thing, will these MIDI Actions include Note On/Note Off events, so
> they can act as MIDI keyswitch triggers? If so, this may be a killer feature
> for using a keyswitched sample library, directly! 
> Another possible use for such "hidden" Note On/Note Off
> messaging could be the possibility of allowing the playback of cluster
>chords with proper notation (not yet available in MuseScore) and those
> more complex ornaments present in the "Articulations & Ornaments" palette,
> that aren't played back in current version.
I don't understand the importance of including Note On/Off events in MIDI Actions. If you want to make a new kind of notation, it's better to write it in code or make a plugin. As I understand it, you want to place some elements on the staff that would not produce MIDI events, and you want to craft the MIDI messages yourself, right? It could be too complicated to keep everything in sync if changing elements did not affect MIDI events. Also, I don't know what a keyswitched sample library is :(.

> Maybe now, with all this MIDI development, it may
> be possible to rethink it, and contact other DAW development teams to
> consider the same scenario, no?
I didn't know about OOMidi before, but it seems like it could be a great project. As I said, developers are focused on the beta now, so I don't think they would rebuild MuseScore to integrate with a DAW at this point. Also, since that project died, MuseScore developers would have to maintain two projects: OOMidi and MuseScore. Anyway, I hope they read this and recall OOMidi after beta1/beta2/release/somewhen.

> The other day, I tried to open a test file I made with MuseScore
> 2.0, a nightly build prior to your changes, and it crashed a more recent
> nightly build. But when I opened another file (a self-made orchestral
> template) not containing any notes/notation, other than the staves with
> instruments names, all went fine. Do you think there is a connection?
There is no connection with my changes. The code I am developing is in a different branch, and it will not be merged into the main (master) branch before I finish testing; only after that will we merge it.
Regarding your problem: there could be a bug in MuseScore. MuseScore may also crash if there are unknown tags in the score. That can happen if you save a file with today's nightly and try to open it in yesterday's nightly. Any new feature can slightly change the MuseScore format, so it's better to use fresh nightly builds.

> I also have some other possible "bugs" that I'd like to address, ... 
> Where should I talk about these, should I post here, or on the Forum?
If you noticed a bug or have a feature request, it's better to report it on the issue tracker [0].

> Could any developer consider asking them to share their research
> project with MuseScore's main release version, so that all Mscore users
> could benefit from this wonderful tool, maybe in a future 3.x version, or so? 
It's really a great project. We have already talked with them. I can't say anything about joining/merging our code, but MuseScore should be ready for this. Yes, I am talking again about fixing bugs and preparing for the beta.




Re: Improving JACK MIDI Out

igevorse
I also wanted to say something about assigning MIDI channels. Now Michael, David, pedroseq and others should be happy :).
If you read my blog posts, you've seen that I published [0] my implementation of the "Assigning MIDI channels" feature. You can find my pull request here: [1]. This pull request also contains the "Improved Instrument Change" feature proposed by pedroseq. You can read about this feature here: [0] [2].
Below I will treat "assigning channels" and "improved Instrument Change" as one feature.
As I wrote in my post, it is a complex feature, so I am afraid I could have broken something. That's why I need to test it before merging. Yes, it works well in my tests and when I am working with a score, but the feature may affect many areas I can't anticipate.
If you want to speed up merging, you can download my branch, compile it and play with it: open scores, create staves, remove parts and notes. Try the new MIDI features: assign MIDI channels to instruments, use the same or different channels, do whatever you want!
And if you find a bug, feel free to write here or on the issue tracker (but mention that you were not using the master branch).

Hope we can do it together!



Re: Improving JACK MIDI Out

pedroseq
Hi Maxim,

> I just made an example of automation tracks for volume, but
> actually it could be done for any midi parameter. But this
> feature is complex and it could be not easy to implement.
Yes, I understand that, right now, the focus is beta testing MuseScore 2. I was just suggesting paving the way for this feature in a future version. Out of curiosity, how complex do you think it would be to implement this feature? What considerations should a developer take into account? Maybe, with time, I may try to help implement this.


> I don't understand the importance of including Note On/Off
> events in MIDI Actions. If you want to make a new kind of
> notation, it's better to write it in code or make a plugin.
> As I understand, you want to place some elements on the
> staff that would not produce midi events. And you want to
> make midi messages by yourself, right? It could be too
> complicated to keep all in sync if changing elements would
> not affect midi events. Also, I don't know what is  
> keyswitched sample library :(.
My idea is to place elements on the staff that can produce "hidden" MIDI events (the user hears the result without seeing the MIDI tweaking behind it, like when playing back slurred notes: note length and attack/velocity are changed, but not visibly).
For instance, a diatonic cluster chord from C to B (same octave) comprises all white keys from C to B. The classical way to notate this is to write all natural notes (C, D, E, F, G, A, B) vertically, like any other chord, but the modern way is to write just the lowest and highest notes and use a vertical line to unite them.
Therefore, using "hidden" MIDI Note On/Off events would allow playing back all seven notes while only showing two (the lowest and the highest), without the need to develop a new notation representation, since notes and vertical line symbols are already present in MuseScore.
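
To show what I mean as concretely as I can (this is only a standalone C++ sketch printing raw MIDI bytes, not a proposal for actual MuseScore code), a two-note diatonic cluster could be expanded into hidden Note On/Off events like this:

    // Sketch only: expand a notated two-note diatonic cluster (lowest and highest
    // written note) into "hidden" note on/off events for every white key between.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // True for the white keys C D E F G A B within any octave.
    static bool isWhiteKey(uint8_t note) {
        switch (note % 12) {
            case 0: case 2: case 4: case 5: case 7: case 9: case 11: return true;
            default: return false;
        }
    }

    int main() {
        const uint8_t low = 60;   // C4 (written)
        const uint8_t high = 71;  // B4 (written)
        const uint8_t channel = 0, velocity = 80;

        std::vector<uint8_t> clusterNotes;
        for (uint8_t n = low; n <= high; ++n)
            if (isWhiteKey(n))
                clusterNotes.push_back(n);

        for (uint8_t n : clusterNotes)   // note on: status byte 0x90 | channel
            std::printf("on : %02X %02X %02X\n", 0x90 | channel, n, velocity);
        for (uint8_t n : clusterNotes)   // note off: status byte 0x80 | channel
            std::printf("off: %02X %02X %02X\n", 0x80 | channel, n, 0);
    }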
Regarding keyswitching and sample libraries, first let me spell out some differences between a GM/GS/XG soundbank and dedicated sample libraries. A GM/GS/XG soundbank loads all instrument sounds into memory, making them readily available and allowing the user to change from one instrument to another through Program Changes (combined with CC0/CC32 bank select). The inconvenience is the limited number of different articulations/techniques available for each instrument (in GM, strings only have sustain, pizzicato and tremolo). Additional techniques (staccato, détaché, e.g.) are only available through emulation, by tweaking MIDI CCs and trying to reproduce the characteristic sounds of those techniques.
A sample library has dedicated samples for the most common techniques of a given instrument, rendering them much more realistic than a soundbank, and the number of available techniques is considerably larger. The drawback is that instrument changes aren't so simple, since each instrument has to be loaded individually and can't be switched through Program Changes (sample libraries don't usually respond to CC0 or CC32).
So, with a sample library, each instrument must have its own MIDI channel. And how do we change from one articulation to another? Through keyswitches: MIDI keys/notes outside a particular instrument's range, which act as triggers to change between different sample layers. For instance, a good violin sample library has multiple velocity sample recordings: a note within the violin's range (which starts on G3), played very softly, activates the "pianissimo" sample layer; played moderately hard, it activates the "mezzo-forte" sample layer; and so on.
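
For comparison, this is what a soundbank-style instrument change boils down to on the wire (again just a sketch printing raw bytes; the bank and program numbers are only an example):

    // Sketch: how a GM/GS/XG soundbank instrument change looks on the wire --
    // bank select (CC0 / CC32) followed by a Program Change.
    #include <cstdint>
    #include <cstdio>

    int main() {
        const uint8_t channel = 0;
        const uint8_t bankMSB = 0, bankLSB = 0;  // default GM bank
        const uint8_t program = 45;              // GM program 46 (0-based 45): Pizzicato Strings

        std::printf("%02X %02X %02X\n", 0xB0 | channel, 0, bankMSB);    // CC0  (bank select MSB)
        std::printf("%02X %02X %02X\n", 0xB0 | channel, 32, bankLSB);   // CC32 (bank select LSB)
        std::printf("%02X %02X\n",      0xC0 | channel, program);       // Program Change
    }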
To change from "sustain" to "staccato" or pizzicato, we press e.g. C0, C#0 or D0 (therefore outside the violin's range) as keyswitches on a MIDI keyboard (real or virtual), or we place (and hide) on the score the note that activates the desired technique. These notes aren't meant to be seen; they just serve to activate a specific technique, and this is a common approach in today's sample libraries. Given this, imagine that I want to change from sustain (default) to pizzicato when using a certain sample library. I'd like to write "pizz." using staff text, go to "Staff Text Properties" -> "MIDI Actions" and insert a NoteOn event for C#0, followed by a NoteOff event, thus activating the pizzicato technique. Does this sound logical to you?
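
And this is roughly what that hidden keyswitch action would send (sketch only; note 13 for C#0 assumes the common convention where C-1 is note 0, and the actual keyswitch note varies from library to library):

    // Sketch: a "pizz." staff text whose MIDI Action sends a keyswitch note
    // (C#0 here, i.e. MIDI note 13) before the passage. The keyswitch note is
    // never shown in the score; it only tells the sample library to switch layers.
    #include <cstdint>
    #include <cstdio>

    static void noteOn(uint8_t ch, uint8_t note, uint8_t vel) {
        std::printf("%02X %02X %02X\n", 0x90 | ch, note, vel);
    }
    static void noteOff(uint8_t ch, uint8_t note) {
        std::printf("%02X %02X %02X\n", 0x80 | ch, note, 0);
    }

    int main() {
        const uint8_t channel = 0;
        const uint8_t keyswitchPizz = 13;  // C#0, assuming C-1 = note 0

        noteOn(channel, keyswitchPizz, 1); // short, inaudible trigger note
        noteOff(channel, keyswitchPizz);
        // ...the following (visible) violin notes now play back as pizzicato...
        noteOn(channel, 67, 80);           // G4, the written note
        noteOff(channel, 67);
    }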



> I didn't know before about OOMidi, but seems like it could be
> a great project. As I said, developers focused now on the
> beta, so I didn't think they would rebuild MuseScore to
> integrate with DAW now. Also, since project died, MuseScore
> developers should make two projects: OOMidi and MuseScore.
> Anyway, I hope they read this and will recall about OOMidi
> after beta1/beta2/release/somewhen.
OOMidi (or OOStudio) was a very promising project whose last stable version is now only available in the repository of KXStudio, an Ubuntu/Debian-based distribution (maintained by "falkXT"; he's very helpful, so he may be interested in participating in such an endeavour). But other actively developed DAWs could also be options (MusE, Ardour, LMMS, ...); who knows, maybe for MuseScore 3.x?

> It's really a great project. We have already talked with them.
> I can't say anything about joining/merging our code, but
> MuseScore should be ready for this. Yes, I am talking again
> about fixing bugs and preparing for the beta.
I took the liberty of contacting Emile, and he seemed happy about the interest in their project. I think they might agree to a "quid pro quo" cooperation: they share their code while getting more developers for their project. It would definitely be a very good feature to incorporate into MuseScore, not found in any other notation software. But, of course, first things first: take care of MuseScore 2 bugs and the beta release!

Unfortunately, I'm on vacation until the end of the month, so I'm unable to compile and test your branch of MuseScore, with its new MIDI port/channel and improved Instrument Change features :(. But I hope many others may be able to do it prior to the first beta release.


Best regards

Re: Improving JACK MIDI Out

igevorse
Hi pedroseq,

> Out of curiosity, how complex do you think it would be to implement this
> feature? 
I can't say exactly whether it's complex or not. It's not only about implementing automation tracks as a "window with bezier curves linked to the score"; we have to decide how to link this new feature with existing controls. Let me explain: for example, we have dynamics that handle volume changes. We could make an automation track for volume, but should it be linked to the dynamic marks? Should we then re-draw the bezier curve? And if the user changes the bezier curve, should we add new dynamic marks to the score?
Another situation arises with chorus and reverb: as I understand it, developers want to hide the chorus and reverb controls from the mixer window as redundant. So, to implement automation tracks for these parameters we would have to re-implement the underlying machinery (these controls have no effect on the internal sequencer now).
So, automation tracks should be linked with existing parameters and controls.
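
To make the linking problem more concrete, here is one possible data model (purely hypothetical, not MuseScore code), where points derived from dynamic marks are locked and user-drawn points stay free:

    // Sketch of one possible data model: automation points that are either
    // "locked" (derived from a dynamic mark in the score) or free (user-drawn).
    // Editing a locked point would really mean editing the mark itself.
    #include <cstdio>
    #include <string>
    #include <vector>

    struct AutomationPoint {
        int tick;
        int value;           // 0-127
        bool locked;         // true if it comes from a score element
        std::string source;  // e.g. "mp", "ff", or empty for a free point
    };

    int main() {
        std::vector<AutomationPoint> volume = {
            {0,   64, true,  "mp"},  // locked: follows the dynamic mark
            {240, 70, false, ""},    // free: user-drawn shaping
            {480, 96, true,  "ff"},  // locked again
        };
        for (const auto& p : volume)
            std::printf("tick %4d value %3d %s %s\n", p.tick, p.value,
                        p.locked ? "[locked]" : "[free]  ", p.source.c_str());
    }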

> Does this sound logical to you?
Yes, now I understand that hidden Note On/Off messages are helpful in this case.

About my test branch: I have started to make separate pull requests with 100% working code, so I hope my PR can be merged in parts.


Re: Improving JACK MIDI Out

ChurchOrganist
igevorse wrote:
> Another situation arises with chorus and reverb: as I understand it, developers
> want to hide the chorus and reverb controls from the mixer window as redundant.
> So, to implement automation tracks for these parameters we would have to
> re-implement the underlying machinery (these controls have no effect on the
> internal sequencer now).
> So, automation tracks should be linked with existing parameters and controls.
Just a word about chorus and reverb controls. In my experience these controllers are usually set at the beginning of a score and not altered in real time. To me it would make sense to have these controllers as part of instrument setup.

Once we have MuseScore 2 out I shall be doing more work on my proposal to incorporate the mixer in the Create Instruments dialogue, but on a temporary basis I am looking at greying out the Reverb and Chorus controls by default and enabling them if JACK is selected as the output. I know how to do this; I just need to work out where the relevant bits of code need to go :)
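
For the record, the generic Qt pattern I have in mind is just this (a standalone sketch with hypothetical widget names, not the actual MuseScore mixer code; it needs Qt Widgets to build):

    // Sketch only: grey out reverb/chorus dials by default and enable them
    // when a "use JACK" option is toggled on.
    #include <QApplication>
    #include <QCheckBox>
    #include <QDial>
    #include <QVBoxLayout>
    #include <QWidget>

    int main(int argc, char** argv) {
        QApplication app(argc, argv);
        QWidget mixerStrip;
        auto* layout     = new QVBoxLayout(&mixerStrip);
        auto* useJack    = new QCheckBox("Use JACK audio/MIDI output");
        auto* reverbKnob = new QDial;
        auto* chorusKnob = new QDial;
        layout->addWidget(useJack);
        layout->addWidget(reverbKnob);
        layout->addWidget(chorusKnob);

        // Greyed out by default; enabled only when JACK is selected.
        reverbKnob->setEnabled(false);
        chorusKnob->setEnabled(false);
        QObject::connect(useJack, &QCheckBox::toggled, [=](bool on) {
            reverbKnob->setEnabled(on);
            chorusKnob->setEnabled(on);
        });

        mixerStrip.show();
        return app.exec();
    }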
Regards
Michael

Re: Improving JACK MIDI Out

pedroseq
In reply to this post by igevorse
Hi Maxim,

Regarding your replies:


> I can't say exactly is it complex or not. It's not only about implementing automation tracks as "window
> with bezier curves linked to the score". We have to decide how to link this new feature with existing
> controls. Let me explain it: for example, we have dynamics that handles volume change. We could make
> an automation track for volume, but should it be linked to dynamic marks? Should we re-make our bezier
> curve? What if user changes bezier curve, should we add a new dynamic marks to the score?

Let's reason from scratch on this matter: why do sequencers need automation tracks? Sequencers, in general, don't provide dynamic/articulation markings capable of affecting playback, as notation editors do. Therefore, they provide automation tracks to allow the user to manipulate playback by tweaking, or fine-tuning, MIDI parameters/control changes in a faster, less "tedious" and more "continuous" way, right?
MuseScore is a notation editor, so all notation elements present in the score should take precedence over other MIDI editing methods. For example, one way to achieve a legato effect is to slightly extend/overlap a previous note with the next one (this is the basic "sequencer way" of producing legato). So, let's say we write two quarter notes in the score and we want to achieve legato by tweaking the extension/overlap amount in the piano roll window. A "smart" score should continue to show two quarter notes, although the first is slightly longer than the second. With other MIDI parameters/controls it should be the same, in my opinion: they're only there for fine-tuning playback.
For instance, if I place an "mp" mark on the score, that's because I intend to have a note velocity (or MIDI CC11, preferably) around 64, according to a predefined chart mapping dynamic markings to velocity levels. This value could be locked in the automation track and shown with a differently coloured handle for this particular marking (other values for other markings), making it a vertex. It should only be altered by changing it in the Inspector window. If I want another dynamic instead, say "ff", then I place it on the score and a new locked value/vertex appears, replacing 64. But what I may want to tweak is the dynamic behaviour around the original "mp" marking. As you know, human playback performance is very subjective.
So, let's say I wish to produce some slightly audible but "invisible" crescendos/diminuendos before or after the "mp" marking. That's where automation tracks would come in handy. The same goes for string vibrato amount (seldom shown explicitly in scores), or piano pedal markings, which in many scores only appear explicitly in the first measure(s), although their effect is intended to apply to the rest of the score.
I think that, to start with, automation tracks might only allow linear and exp/log curves (let's say, with 10 curvature degrees each), instead of true bezier curves (much harder to program), since, in my opinion, most live performances would be well simulated by those simpler approaches.
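
As a rough illustration of the kind of curve I have in mind (a standalone sketch with made-up numbers, where a single "shape" parameter stands in for the curvature degree):

    // Sketch: a dynamics-to-level table plus simple interpolation between two
    // locked points -- the "curvature degree" idea, approximated with one shape
    // parameter instead of true bezier curves.
    #include <cmath>
    #include <cstdio>
    #include <map>
    #include <string>

    int main() {
        // Illustrative mapping, not MuseScore's actual table.
        std::map<std::string, int> dynamicToLevel = {
            {"pp", 33}, {"p", 49}, {"mp", 64}, {"mf", 80}, {"f", 96}, {"ff", 112}};

        const double from  = dynamicToLevel["mp"];   // locked start vertex
        const double to    = dynamicToLevel["ff"];   // locked end vertex
        const double shape = 3.0;  // 1.0 = linear; larger values bend the curve

        for (int step = 0; step <= 8; ++step) {
            double t = step / 8.0;                   // 0..1 across the crescendo
            double curved = std::pow(t, shape);      // exponential-like shaping
            int level = static_cast<int>(from + (to - from) * curved + 0.5);
            std::printf("t=%.2f  level=%d\n", t, level);
        }
    }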


> Another situation with chorus and reverb: as I understand, developers want to hide chorus and reverb
> controls from mixer windows as redundant. So, to implement automation tracks for these parameters we
> should re-implement the base (controls has no effect on internal sequencer now). So, automation tracks
> should be linked with existing parameters and controls.

I think that reverb and chorus automation tracks should also exist. See my reasons in the reply below.


@Michael/ChurchOrganist
> Just a word about chorus and reverb controls. IME these controllers are usually set at the beginning of a
> score and not altered in realtime. To me it would make sense to have these controllers as part of
> instrument setup. Once we have MuseScore 2 out I shall be doing more work on my proposal to  
> incorporate the mixer in the Create instrument dialogue, but on a temporary basis I am looking at greying
> out Reverb and Chorus controls by default and enabling them if JACK is selected as the output. I know
> how to do this - I just need to work out where the relevant bits of code need to go :)

Regarding the use of Reverb/Chorus controls, I would like to leave my thoughts on the subject here. As you say, Reverb and Chorus are usually set at the beginning of the score and left untouched, because, in many live performances, the player sits in their place within a given room and doesn't move. But then, by the same logic, the same would apply to Volume and Panning.
Also, as Maxim has pointed out to me a few times, when using synthesizers and electroacoustic instruments, static MIDI controls aren't entirely desirable, since there the player may change any MIDI parameter/control at will, and Reverb and Chorus are just two of them. In addition, considering, for instance, the SSMN MuseScore-derived project, where players are expected to move around the room, one way to emulate the displacement effect with MIDI controls would be through the careful variation of (at least) Reverb, Panning and Volume.
That being said, I don't think there is much need for Reverb and Chorus to have their own knobs in a separate mixer, as long as MuseScore maintains the ability to change Reverb and Chorus settings via MIDI Actions/automation tracks, or something similar.
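
To illustrate what I mean by careful variation (a standalone sketch printing raw MIDI bytes; controllers 7, 10 and 91 are the standard Volume, Pan and Reverb send, while the movement curve itself is made up):

    // Sketch: emulating a player moving across the room by gradually varying
    // Pan (CC10), Volume (CC7) and Reverb send (CC91) over a few steps.
    #include <cstdint>
    #include <cstdio>

    static void cc(uint8_t ch, uint8_t controller, uint8_t value) {
        std::printf("%02X %02X %02X\n", 0xB0 | ch, controller, value);
    }

    int main() {
        const uint8_t channel = 0;
        for (int step = 0; step <= 4; ++step) {
            uint8_t pan    = static_cast<uint8_t>(step * 31);      // left -> right
            uint8_t volume = static_cast<uint8_t>(100 - step * 8); // slowly fading
            uint8_t reverb = static_cast<uint8_t>(40 + step * 20); // more "distance"
            std::printf("-- step %d --\n", step);
            cc(channel, 10, pan);
            cc(channel, 7, volume);
            cc(channel, 91, reverb);
        }
    }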

Best regards.