This is why we are taking a hybrid approach. It doesn't make sense to make people reinvent the wheel every time they want a simple behavior. It also doesn't make good production sense to have to parse a spaghetti mess every time someone wants to change something, share behaviors, or debug and optimize someone else's effect. Finally, projects need to be able to choose: lock down all features and expose subsets of behavior to artists for total control and efficiency, or, where the project demands it, make every behavior custom and share nothing.
We are looking at tiers:
1) Instanceable, reusable, inheritance-driven emitters whose parameters bubble up to production artists. These emitters can be combined into systems that share data and communicate with each other, either through system-level scripts or arbitrary event payloads (see the event-payload sketch after this list).
2) Common "module-style" behaviors which all accept common inputs, outputs, and particle attributes, and which read from and write to the same data. Taking this approach means common behaviors and needs are built once, reused forever, and can be optimized project-wide. Additional behaviors can be added without adding more particle memory or complexity than necessary, and they "stack" with each other: put down two modules that deal with forces and they will correctly sum and affect velocity, for example (sketched below).
3) Fully programmable distributions and data interfaces which talk to common particle parameters. This removes the need for hardcoded distributions (uniform range, etc.) and for module granularity ("initial color" vs. "color over life" go away in favor of a single "set color" or whatever). It also removes the need to build custom modules for many of the complex behaviors that would normally require making a new graph; instead, you just "talk" to the data you want to modify without changing the underlying module behavior (multiplying velocity by delta time and adding it to position, for example; see the expression sketch below).
4) For the power user, full access to the entire set of functions to "roll your own" behavior in a graph and have it stack nicely with the existing behaviors; hold onto a particle payload for the particle's entire lifetime, or use transient data for just that frame, whatever you need (see the persistent/transient sketch below).
5) Abstraction between simulation and rendering should enable new, better features. There's no reason you can't simulate and then feed the results into volume rendering, physics, gameplay forces, or reading and writing volume data. We also fail if data interfaces aren't completely arbitrary: sourcing from external data like CSV files, or from in-engine data like static or skeletal meshes, gameplay events, and so on. Arbitrary struct in, arbitrary behavior out, full data sharing between all systems in between (see the data-interface sketch below).
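To make tier 1's emitter communication concrete, here's a minimal C++ sketch of emitters in one system exchanging an arbitrary event payload. Everything in it (SystemEventBus, DeathEvent, the channel name) is invented for illustration, not any engine's real API.

```cpp
#include <cstdio>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// An arbitrary payload: any struct can ride along with an event.
struct DeathEvent {
    float Position[3];
    float Velocity[3];
};

// A tiny event bus a "system" might own so its emitters can talk.
class SystemEventBus {
public:
    using Handler = std::function<void(const DeathEvent&)>;
    void Subscribe(const std::string& Channel, Handler H) {
        Handlers[Channel].push_back(std::move(H));
    }
    void Publish(const std::string& Channel, const DeathEvent& E) {
        for (auto& H : Handlers[Channel]) H(E);
    }
private:
    std::unordered_map<std::string, std::vector<Handler>> Handlers;
};

int main() {
    SystemEventBus Bus;
    // A secondary emitter spawns sparks wherever a primary particle dies.
    Bus.Subscribe("OnParticleDeath", [](const DeathEvent& E) {
        std::printf("spawn sparks at (%f, %f, %f)\n",
                    E.Position[0], E.Position[1], E.Position[2]);
    });
    // The primary emitter publishes an event when one of its particles dies.
    Bus.Publish("OnParticleDeath", DeathEvent{{1, 2, 3}, {0, 0, -9.8f}});
}
```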
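For tier 2, here's a minimal sketch of module stacking: every module reads from and writes to the same shared particle data, so two force modules naturally sum into velocity with no special-case glue. FParticleData, IModule, and the module names are hypothetical.

```cpp
#include <cstdio>
#include <memory>
#include <vector>

// Shared struct-of-arrays particle data that every module reads/writes.
struct FParticleData {
    std::vector<float> VelocityZ;  // a single attribute, kept minimal
};

struct IModule {
    virtual ~IModule() = default;
    virtual void Tick(FParticleData& Data, float Dt) = 0;
};

// Any module that deals with forces accumulates into the same velocity data.
struct FGravityModule : IModule {
    void Tick(FParticleData& D, float Dt) override {
        for (float& Vz : D.VelocityZ) Vz += -980.0f * Dt;
    }
};
struct FUpdraftModule : IModule {
    void Tick(FParticleData& D, float Dt) override {
        for (float& Vz : D.VelocityZ) Vz += 400.0f * Dt;
    }
};

int main() {
    FParticleData Data{{0.0f, 0.0f}};
    std::vector<std::unique_ptr<IModule>> Stack;
    Stack.push_back(std::make_unique<FGravityModule>());
    Stack.push_back(std::make_unique<FUpdraftModule>());  // stacks: forces sum
    for (auto& M : Stack) M->Tick(Data, 1.0f / 60.0f);
    std::printf("vz after one tick: %f\n", Data.VelocityZ[0]);  // (-980+400)/60
}
```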
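For tier 3, a sketch of how a single generic "set attribute" expression can replace both hardcoded distributions and per-purpose modules, including the velocity-integration example from above. FParticle and FExpression are invented names for the sketch.

```cpp
#include <cstdio>
#include <functional>

struct FParticle {
    float Position;       // 1D for brevity
    float Velocity;
    float NormalizedAge;  // 0 at spawn, 1 at death
    float Red;
};

// One generic operation: write attributes from an arbitrary expression.
using FExpression = std::function<void(FParticle&, float /*Dt*/)>;

int main() {
    FParticle P{0.0f, 50.0f, 0.25f, 0.0f};
    const float Dt = 1.0f / 60.0f;

    // "Set color" covers both initial color and color-over-life: the
    // expression simply reads whatever drives it (here, normalized age).
    FExpression SetColor = [](FParticle& P, float) {
        P.Red = 1.0f - P.NormalizedAge;  // fade red out over life
    };

    // Euler integration expressed as a plain expression over shared data,
    // no custom module required: position += velocity * dt.
    FExpression Integrate = [](FParticle& P, float Dt) {
        P.Position += P.Velocity * Dt;
    };

    SetColor(P, Dt);
    Integrate(P, Dt);
    std::printf("pos=%f red=%f\n", P.Position, P.Red);
}
```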
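For tier 4's storage choice, a sketch of the difference between holding a payload forever and using per-frame scratch: a persistent attribute rides with the particle across frames, a transient one exists only for the current frame and costs no lasting memory. FAttributeStore and the attribute names are hypothetical.

```cpp
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

struct FAttributeStore {
    // Persistent attributes survive across frames with the particle payload.
    std::unordered_map<std::string, std::vector<float>> Persistent;
    // Transient attributes are cleared every frame.
    std::unordered_map<std::string, std::vector<float>> Transient;

    void BeginFrame() { Transient.clear(); }
};

int main() {
    FAttributeStore Store;
    Store.Persistent["SeedColor"] = {0.9f, 0.1f, 0.0f};  // lives forever

    for (int Frame = 0; Frame < 2; ++Frame) {
        Store.BeginFrame();
        // A custom graph can stash intermediate results for this frame only.
        Store.Transient["WindSample"] = {0.3f * Frame};
        std::printf("frame %d: wind=%f seed_r=%f\n", Frame,
                    Store.Transient["WindSample"][0],
                    Store.Persistent["SeedColor"][0]);
    }
}
```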
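And for tier 5, a sketch of an arbitrary data interface: the simulation sees only an abstract read, and a CSV file, a skeletal mesh sampler, or a gameplay-event feed could all sit behind it. IDataInterface and FCsvDataInterface are invented, and spawn_points.csv is a stand-in path.

```cpp
#include <array>
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// The simulation only sees this interface, never the concrete source.
struct IDataInterface {
    virtual ~IDataInterface() = default;
    virtual std::vector<std::array<float, 3>> ReadPositions() = 0;
};

// One concrete source: rows of "x,y,z" in a CSV file. A skeletal-mesh
// sampler or gameplay-event feed would implement the same interface.
struct FCsvDataInterface : IDataInterface {
    explicit FCsvDataInterface(std::string InPath) : Path(std::move(InPath)) {}
    std::vector<std::array<float, 3>> ReadPositions() override {
        std::vector<std::array<float, 3>> Out;
        std::ifstream File(Path);
        for (std::string Line; std::getline(File, Line);) {
            std::array<float, 3> P{};
            std::stringstream Ss(Line);
            std::string Cell;
            for (int i = 0; i < 3 && std::getline(Ss, Cell, ','); ++i)
                P[i] = std::stof(Cell);
            Out.push_back(P);
        }
        return Out;
    }
    std::string Path;
};

int main() {
    FCsvDataInterface Csv("spawn_points.csv");  // hypothetical source file
    for (const auto& P : Csv.ReadPositions())
        std::printf("spawn at (%f, %f, %f)\n", P[0], P[1], P[2]);
}
```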
Lastly, if all of this isn't faster to build than it used to be, more efficient to simulate, and more full-featured to render, then we screwed up.