OpenFX plug-in programming guide (C/C++)
1st draft 08/24/2014 Alexandre Gauthier-Foichat
Refined with Pierre Jasmin's notes on 02/07/2014
Table of contents
Introduction
Presentation of the Support layer
Walkthrough : The Invert plug-in
Introduction
This guide is intended for experienced C++ developers who want to develop OpenFX plug-ins using the Natron fork of the OpenFX standard: https://github.com/NatronGitHub/openfx.
Note that the Natron fork is 100% compliant with the original repository, and using this guide ensures that your plug-in(s) will work across all hosts.
First off, the main official specification can be found here: http://openfx.sourceforge.net/Documentation/1.3/ofxProgrammingReference.html and should be followed precisely in case of doubt. That document is authoritative and answers almost all the questions one might have. However, it is a long document which can be quite hard to swallow the first time one has to read it.
I'll skip all the details regarding the packaging of OpenFX plug-ins and the general philosophy, and focus this document on programming OpenFX plug-ins using the Support layer.
What is the Support layer?
OpenFX is, under the hood, just a protocol that lets a plug-in and a host application communicate.
They do so through a C API using blind handles and properties, which are uniquely identified by names that all begin with kOfx*. The C language is used because a C++ interface is sensitive to compiler versions. A plug-in compiled against the base C API will work fine in Natron; this document, however, is about the C++ wrapper used to develop plug-ins for Natron.
To make this API easier to use, the OpenFX association has written a C++ wrapper around it. The wrapper on the plug-in side is called the Support layer and the one on the host side (the application, e.g. Natron) is called the HostSupport layer.
The official repository of the OpenFX association is actively maintained, though its C++ layers contain bugs and missing implementations. That's why the Natron dev team has forked it and continues maintaining it and fixing bugs, whilst also incorporating the new features of the newer versions of OpenFX.
The official repository of the OpenFX association can be found here: https://github.com/ofxa/openfx
The repository the Natron dev team maintains can be found here: https://github.com/NatronGitHub/openfx
On this repository you will find the two layers mentioned above in separate folders with their respective names.
The Examples directory contains OpenFX plug-in examples which were programmed directly against the C API (thus not using the Support layer).
The Support folder contains a Plugins folder with further examples; these, on the other hand, are programmed using the Support layer.
The Natron dev team has 2 separate repositories for its OpenFX plug-ins.
One handles all plug-ins which do input/output operations and rely on external libraries (such as OpenImageIO, OpenColorIO, FFmpeg, OpenEXR, etc.) and can be found here: https://github.com/NatronGitHub/openfx-io
The other one is for all other plug-ins which do image processing. They do not require linking to any other external library and are generally easier to compile and understand. The address of that repository is: https://github.com/NatronGitHub/openfx-misc
Depending on what kind of plug-in you implement, you should base your work upon one of the two repositories above, as they contain state-of-the-art OpenFX plug-ins which use the OpenFX API as it should be used.
If you were to create a new reader plug-in to read a format that isn't supported already, I suggest that you fork openfx-io and derive the GenericReader class, which does all the quite complex handling that a fully-featured reader plug-in is expected to do (such as downscaling and colour-space transformation). The same goes for writer plug-ins: I suggest you derive the GenericWriter class.
Note that Readers and Writers are not originally part of the OpenFX standard; to implement the I/O plug-ins, Natron uses the TuttleOFX Reader/Writer context extension. As we will see later, a plug-in specifies in which contexts it can run.
If, on the other hand, you were to write an image processing plug-in, I suggest you fork the openfx-misc repository and look at the plug-ins inside as examples. The Crop and Invert plug-ins are trivial plug-ins which should give you a fair understanding of how OpenFX works.
Architecture of a plug-in
A plug-in is a folder on Windows/Linux and a bundle on MacOSX, which can be represented as follows:
MyPlugin.ofx.bundle/
Contents/
Win32/
MyPlugin.ofx
Win64/
MyPlugin.ofx
Resource/
MyPlugin.svg
The .ofx is just a .so on Linux, a .dylib on MacOSX and a .dll on Windows whose extension has been renamed to .ofx.
For an extensive explanation of the details of how the bundle should be setup, please refer to
http://openfx.sourceforge.net/Documentation/1.3/ofxProgrammingReference.html#id449875
Communication between the host and a plug-in
Each plug-in can be instantiated in different contexts by an application, depending on its use.
The contexts are well described in the OpenFX specification, so we won't go through them here.
http://openfx.sourceforge.net/Documentation/1.3/ofxProgrammingReference.html#ImageEffectContexts
The OpenFX spec defines functions that the host can call on a plug-in to make it perform specific actions. These functions are called actions and their specification is well described here:
http://openfx.sourceforge.net/Documentation/1.3/ofxProgrammingReference.html#id473661
The most important action to implement in general is the render action, which is called when a plug-in needs to render its image. We will detail the important actions a bit more in the chapter dedicated to the plug-in object.
The plug-in can also call some functions on the host to query information. These functions are grouped in "suites". Generally the type of thing you would like to ask is "fetch that image", "fetch that image's size", etc.
The idea behind suites is to facilitate evolution: over time a suite can be versioned, and each suite version is supported starting from a given API version. As such, the Support layer matches an API version.
Using the C++ Support Layer
A C++ plug-in is composed of 3 objects in the general case:
The factory object, which is used to describe and instantiate the plug-in for the host application when the binary is loaded.
The plug-in object, which is used to communicate with the host application and do some work. Generally the processing is not done in this object; rather, we do it in the last object…
The processor object which is used to do the processing in an optimised way: OpenFX offers a way to do multi-threading easily using this class.
The last class is not mandatory and one could also do the rendering in the plug-in class, though it would not be multi-threaded.
The factory
It basically serves 3 purposes:
Instantiating the plug-in object.
Declaring parameters/clips: this is done in the describeInContext function. The parameters are what the user *could* (generally can, but not if they are hidden/disabled) interact with in the user interface. There are several types of parameters and they are quite well described in the specification:
http://openfx.sourceforge.net/Documentation/1.3/ofxProgrammingReference.html#ParametersChapter . What you can control is whether the user can animate them, their name, default value, etc. Please check out the examples in openfx-misc for implementation details.
The clips are the objects that refer to the input images (in Natron a clip is the arrow between 2 nodes). A clip is this plug-in's view of another, input, plug-in. The plug-in must always define an output clip: this is where the output image of this plug-in will be written.
Declaring some properties of the plug-in to the host application: this is done in the describe function. This is where the plug-in defines its name, which bit depths it supports, whether it supports multi-resolution images, whether it needs to fetch images at different times, whether it supports interlaced images, whether it supports multi-threading, etc.
See the Invert plug-in's describe function here https://github.com/NatronGitHub/openfx-misc/blob/master/Invert/Invert.cpp for an example of the function.
All properties defined in the describe function are well covered by the OpenFX specification, though some need extra caution:
- kOfxImageEffectPluginRenderThreadSafety : this must be set carefully. If misused, your plug-in might not be thread-safe.
Unsafe means that all instances of the plug-in can only have 1 render thread at once: they will all be synchronised.
InstanceSafe means that an instance can only have 1 render thread at a time, though several instances do not need any special synchronisation.
FullySafe means that any instance of the plug-in can have multiple render threads running simultaneously.
On top of that, the plug-in can also set the property kOfxImageEffectPluginPropHostFrameThreading : when set to 1, and if the thread-safety of the plug-in permits it, the host will slice the render window and call the render action from several threads instead of calling it with only 1 thread.
Bear in mind that as a plug-in you do not need to set this to 1, since you can use the multi-thread suite and do the multi-threading yourself. This suite also includes locking facilities allowing you to properly control the thread-safety of your plug-in.
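As a hedged illustration, here is how these two properties are typically declared from the factory's describe function with the C++ Support layer (the setter names assume the ofxsImageEffect.h wrapper; adapt them if your version differs):
void MyPluginFactory::describe(OFX::ImageEffectDescriptor &desc)
{
    // Any instance may run several render threads at once.
    desc.setRenderThreadSafety(OFX::eRenderFullySafe);
    // Let the host slice the render window and call render() from several threads.
    desc.setHostFrameThreading(true);
}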
- kOfxImageEffectPropSupportsMultiResolution : when set to 1, your plug-in is expected to work with arbitrary image rectangles in input and output, which may not necessarily be the same.
- kOfxImageEffectPropSupportsTiles : when set to 1, your plug-in is expected to work with images that are not the "full image" but just a sub-rectangle of it. Any per-pixel process is a candidate for this.
The following behavior is expected:
1) If a plug-in returns an error during kOfxActionLoad, it should, prior to returning, clean up any global memory it has allocated using the OFX memory suite. The plug-in will then not show up in the UI.
2) If a plug-in returns an error during Describe, it expects the host to call kOfxActionUnload, and the plug-in will not show up in the UI.
3) In all other cases the plug-in should release everything during kOfxActionUnload; otherwise, by the time the system destroys the dynamic library, the OFX handles will probably be gone.
To create a group of plug-ins that will all show up under the same menu in the user interface, set the kOfxImageEffectPluginPropGrouping property, a string containing the menu group under which you want your plug-in to appear, e.g. "Transform" or "Color/Transform".
The plug-in object
This is the main object bridging your plug-in with the application. When implementing your plug-in you will want to derive the OFX::ImageEffect class. It has a bunch of virtual functions which are the actions the host application can call (the functions mentioned above).
All communication occurs via an action handler, which is the mainEntryStr function in ofxsImageEffect.cpp.
The constructor of the plug-in is there to fetch a pointer to each of the parameters/clips you previously defined in the describe functions of the factory. These pointers represent the "instantiated" version of those parameters/clips, whereas in the describe functions you merely described them so that the host would instantiate them correctly. It is through these pointers that you get/set values and query information.
The output clip (sometimes called dstClip_ in our examples) represents the output image.
The source clip represents the source image, and it is from this object that you fetch the input image.
If you were to have several input clips, then you would fetch the input images from each of your source clips.
Some clips can be optional (such as a mask, for example) and need to be declared as such explicitly in the describeInContext function of the factory.
In the following I will briefly re-explain the main actions that are generally implemented by a plug-in. For a more detailed explanation, and for how to report errors in these functions, please check the OpenFX specification which covers them fully.
The isIdentity action
virtual bool isIdentity(const IsIdentityArguments &args, Clip * &identityClip, double &identityTime) OVERRIDE FINAL
This function must return true if the effect in its current state will not apply any change to the source image. This is called by the host to determine whether a call to the render action is necessary or not.
When true then the rendering pipeline is much faster as the host just skips this plug-in from the compositing tree.
The identityClip parameter must be set to a pointer to the input clip of which the effect is an identity.
The identityTime parameter must be set to the time at which the effect is an identity of that input clip.
For example a gain effect whose "Scale" parameter would be set to 1 would be an identity.
The changedParam action
virtual void changedParam(const OFX::InstanceChangedArgs &args, const std::string &paramName) OVERRIDE FINAL;
This function is called every time a parameter is changed, either because you set the value of a parameter programmatically or because the user interacted with the parameter.
The args.reason parameter will tell you why this function was called.
For example, if you had a button parameter, pressing it would call this function with args.reason == eChangeUserEdit.
This function is also a great place to show/hide and enable/disable other parameters according to special values of another parameter.
This function could be used to implement analysis effects (such as a tracker). Fetching an image is allowed in this action.
This is also where parameter writing is supposed to be done. For example, one can fetch an image, find the most popular colour and write it into a parameter; the next render action will then see that updated parameter value. Data can also be stored in the instance data pointer, but it would have to be saved during the syncPrivateData action.
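As a small, hypothetical sketch (the parameter name is illustrative, and the member pointers are assumed to have been fetched in the constructor), this is how one would typically enable/disable a parameter from changedParam:
void MyPlugin::changedParam(const OFX::InstanceChangedArgs &args, const std::string &paramName)
{
    if (paramName == "useCustomColor") { // hypothetical checkbox parameter
        bool useIt;
        _useCustomColor->getValueAtTime(args.time, useIt);
        // Grey out the colour parameter when the checkbox is off.
        _customColor->setEnabled(useIt);
    }
}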
The getRegionOfDefinition action
virtual bool getRegionOfDefinition(const OFX::RegionOfDefinitionArguments &args, OfxRectD &rod) OVERRIDE FINAL;
This is called by the host to determine the size of the image (or region of definition) produced by this effect.
If your plug-in doesn't apply any geometric transformation to the image, then it is probably not modifying its size (e.g. an Invert plug-in doesn't modify the image's region of definition). In that case you do not need to implement this function: the default behaviour is to return the region of definition of the source clip.
On the other hand, a crop effect would return the size of the crop area as its region of definition.
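For illustration, here is a hedged sketch of what a crop-like implementation could look like, assuming two hypothetical Double2D parameters _btmLeft and _size fetched in the constructor:
bool MyCropPlugin::getRegionOfDefinition(const OFX::RegionOfDefinitionArguments &args, OfxRectD &rod)
{
    double x, y, w, h;
    _btmLeft->getValueAtTime(args.time, x, y);
    _size->getValueAtTime(args.time, w, h);
    rod.x1 = x;
    rod.y1 = y;
    rod.x2 = x + w;
    rod.y2 = y + h;
    return true; // we defined the region of definition ourselves instead of relying on the default
}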
The getRegionsOfInterest action
virtual void getRegionsOfInterest(const OFX::RegionsOfInterestArguments &args, OFX::RegionOfInterestSetter &rois) OVERRIDE FINAL;
Even though the name is close to the getRegionOfDefinition action, it doesn't serve the same purpose at all!
This function is called by the host before the render action, when it wants to pre-render the input images this effect might need. In order to do so, the host needs to ask this effect which rectangle of the source image it is interested in; that is the purpose of this action. In general, if your plug-in needs exactly the render window from its inputs, you don't have to implement this function. On the other hand, a blur plug-in would have to add the border padding to the region of interest of its input clip (see the sketch below).
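Here is a hedged sketch of that blur case, assuming a hypothetical _radius parameter and the RegionOfInterestSetter API of the Support layer:
void MyBlurPlugin::getRegionsOfInterest(const OFX::RegionsOfInterestArguments &args, OFX::RegionOfInterestSetter &rois)
{
    double radius;
    _radius->getValueAtTime(args.time, radius);
    // Pad the rectangle requested by the host so that enough source pixels get pre-rendered.
    OfxRectD roi = args.regionOfInterest;
    roi.x1 -= radius;
    roi.y1 -= radius;
    roi.x2 += radius;
    roi.y2 += radius;
    rois.setRegionOfInterest(*srcClip_, roi);
}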
The getClipPreferences action
virtual void getClipPreferences(ClipPreferencesSetter &clipPreferences) OVERRIDE FINAL;
You rarely need to implement this action. It is called by the host to allow the effect to modify the state of its clips, such as their premultiplication (is the image premultiplied or not?), the image components (is it Alpha, RGB or RGBA?), or the image bit depth (is it byte, short or float?).
For example, the Shuffle plug-in uses it when the user chooses different output components or a different bit depth.
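As a minimal, hedged sketch (assuming the ClipPreferencesSetter API of the Support layer), a plug-in that always outputs RGBA could do:
void MyPlugin::getClipPreferences(OFX::ClipPreferencesSetter &clipPreferences)
{
    // Tell the host that the output clip is RGBA, whatever the source components are.
    clipPreferences.setClipComponents(*dstClip_, OFX::ePixelComponentRGBA);
}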
The render action
virtual void render(const OFX::RenderArguments &args) OVERRIDE FINAL;
This is where the processing must be done. The args contain several parameters that define how the image should be rendered.
time: The time at which the render is taking place. This can be used to fetch the input images at the same time or at other times.
renderScale: when different from 1, this informs you that the image is rendered at a lower resolution than the full resolution. For example, Natron uses this when the user zooms out. For a filter plug-in this is merely a hint and doesn't hold much value; on the other hand, for a reader effect (such as the ones in openfx-io) this states clearly at which scale you should read the image, and in that case you should explicitly downscale the image yourself (ONLY FOR READER PLUG-INS!). If you don't support downscaled images, then in all the actions called by the host you are expected to check the render scale parameter and throw a kOfxStatFailed exception if the render scale is different from 1 (see the snippet after this list). This in turn informs the host that you don't support downscaling and it will take care of it for you.
renderWindow: This is the portion of the image to render. If you specified that you support tiles in the describe function of the factory then it might not be the full region of definition of the image.
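Here is the render-scale guard mentioned in the renderScale item above, as a minimal sketch for a plug-in that does not support downscaled renders:
void MyPlugin::render(const OFX::RenderArguments &args)
{
    if (args.renderScale.x != 1. || args.renderScale.y != 1.) {
        // We don't support downscaled renders: tell the host so it handles the scaling for us.
        OFX::throwSuiteStatusException(kOfxStatFailed);
    }
    // ... normal rendering code follows
}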
To fetch input images, call fetchImage on the source clips at the desired time. You're expected to check whether the input clip is connected before fetching the image (i.e. call isConnected() on the clip). A clip is connected in Natron when the arrow is connected to another effect.
If you cannot use the input image for any reason (bad bit depth, bad components, etc.) then you're expected to throw a meaningful exception to indicate that the render failed (kOfxStatErrImageFormat).
You can only fetch an image during the render action or the changedParam action.
Generally a plug-in is better if it can handle arbitrary bit depths and image components. To deal with that in our plug-ins we template the internal render function by the components and the bit depth.
For example, in the Invert plug-in of the openfx-misc repository, the render function just instantiates the templated class ImageInverter with the right template parameters depending on the bit depth and the image components.
In this example we use the processor object (ImageInverter) to do the rendering because it enables the multi-threading offered by the host, but we could do the processing directly in the render function, though it wouldn't be multi-threaded (unless we had set the kOfxImageEffectPluginPropHostFrameThreading property to 1).
The getTimeDomain action
virtual bool getTimeDomain(OfxRangeD &range) OVERRIDE FINAL;
This action is called by the host to figure out the frame range over which the plug-in has an effect. For instance a Reader plug-in would return the length of the image sequence, and a rectangle generator could return a firstFrame-lastFrame pair.
Note that in most cases the default implementation is fine, which is to have an effect over the union of the frame ranges of the input effects.
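For a generator-style effect, a hedged sketch could look like this (the _firstFrame and _lastFrame integer parameters are hypothetical and assumed to have been fetched in the constructor):
bool MyGeneratorPlugin::getTimeDomain(OfxRangeD &range)
{
    int first, last;
    _firstFrame->getValue(first);
    _lastFrame->getValue(last);
    range.min = first;
    range.max = last;
    return true; // we overrode the default behaviour
}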
The getFramesNeeded action
virtual void getFramesNeeded(const OFX::FramesNeededArguments &args, OFX::FramesNeededSetter &frames) OVERRIDE FINAL;
If you were to write a plug-in that needs images at different times to do its processing, e.g. a retiming plug-in that needs the images at T - 1 and T + 1 in order to produce the image at frame T, you need to implement this action.
This is called by the host to figure out which images you will need to render, so that it can pre-render them. You should
in turn specify exactly which images will be fetched with the fetchImage(...) function in render.
Note that by default, when implementing a simple effect such as Invert, you only need the source image at the current time; the default implementation handles that for you, so you don't have to implement this action.
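A hedged sketch for the retiming case above, assuming the FramesNeededSetter API of the Support layer:
void MyRetimePlugin::getFramesNeeded(const OFX::FramesNeededArguments &args, OFX::FramesNeededSetter &frames)
{
    // Ask the host to pre-render frames T - 1 to T + 1 from the source clip.
    OfxRangeD range;
    range.min = args.time - 1;
    range.max = args.time + 1;
    frames.setFramesNeeded(*srcClip_, range);
}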
The processor
The host application can implement the multi-thread suite. Remember, a suite is a set of functions that a host can implement, offering some functionality to the plug-in. In this case the suite is designed to offer SMP-style multi-processing.
The C++ processor class is just a wrapper around this suite so that it is easier for you, as a plug-in developer, to multi-thread your processing. You need to derive OFX::ImageProcessor to craft your own processor.
The only function you have to override is the multiThreadProcessImages function. This is the core render function which renders an image rectangle for a single thread. You can cycle through all the examples for inspiration on how the processing is generally written, though the inner part of the pixel processing is really up to the plug-in developer.
Bear in mind that it is more efficient to get all the values from the parameters before calling the processor's multi-threaded function. To do that, we generally fetch all the values we want in the render function (or, more specifically, in the setupAndProcess function) and then pass them to the processor class.
The getValue function of a parameter can be quite expensive and it's better to call it once if you can.
Same applies for the fetchImage function.
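To make this concrete, here is a hedged, simplified sketch of a gain processor derived from OFX::ImageProcessor. The templating mirrors what the openfx-misc plug-ins do; setSrcImg and setGain are our own setters, and the base class is assumed to provide _effect, _dstImg, setDstImg, setRenderWindow and process():
template <class PIX, int nComponents, int maxValue>
class GainProcessor : public OFX::ImageProcessor
{
public:
    GainProcessor(OFX::ImageEffect &instance)
    : OFX::ImageProcessor(instance)
    , _srcImg(0)
    , _gain(1.)
    {
    }

    void setSrcImg(OFX::Image *src) { _srcImg = src; }
    void setGain(double gain) { _gain = gain; }

private:
    // Called once per thread with the sub-rectangle that this thread must fill.
    virtual void multiThreadProcessImages(OfxRectI procWindow)
    {
        for (int y = procWindow.y1; y < procWindow.y2; ++y) {
            if (_effect.abort()) {
                break;
            }
            PIX *dstPix = (PIX *) _dstImg->getPixelAddress(procWindow.x1, y);
            for (int x = procWindow.x1; x < procWindow.x2; ++x) {
                const PIX *srcPix = (const PIX *) (_srcImg ? _srcImg->getPixelAddress(x, y) : 0);
                for (int c = 0; c < nComponents; ++c) {
                    double v = srcPix ? srcPix[c] * _gain : 0.;
                    dstPix[c] = (PIX) (v > maxValue ? maxValue : v); // clamp to the bit depth's maximum
                }
                dstPix += nComponents;
            }
        }
    }

    OFX::Image *_srcImg;
    double _gain;
};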
Walkthrough: The invert plug-in
In this part we will examine the source code of the Invert plug-in and comment the relevant parts.
First, let's take a look at the PluginRegistration.cpp file.
Any plug-in must register a static factory for itself in the following function:
namespace OFX
{
namespace Plugin
{
void getPluginIDs(OFX::PluginFactoryArray &ids)
{
...
}
}
}
In the Invert example, we made a function named getInvertPluginID, declared in Invert.h and defined in Invert.cpp, and in that function we just instantiate the factory declared above:
void getInvertPluginID(OFX::PluginFactoryArray &ids)
{
static InvertPluginFactory p(kPluginIdentifier, kPluginVersionMajor, kPluginVersionMinor);
ids.push_back(&p);
}
The plug-in must be registered with its raw ID (the plug-in identifier for the host application) and its version. The host application can then sort the plug-ins by version and try to avoid duplicates.
Now that we have that small function, we need to define the InvertPluginFactory.
The Factory
We declare the class in the .cpp file, using the handy macro that was created for that purpose:
mDeclarePluginFactory(InvertPluginFactory, {}, {});
That macro just declares the class and its virtual functions. The virtual functions are the actions that a host can call on the factory, they are:
#define mDeclarePluginFactory(CLASS, LOADFUNCDEF, UNLOADFUNCDEF) \
class CLASS : public OFX::PluginFactoryHelper<CLASS> \
{ \
public: \
CLASS(const std::string& id, unsigned int verMaj, unsigned int verMin):OFX::PluginFactoryHelper<CLASS>(id, verMaj, verMin){} \
virtual void load() LOADFUNCDEF ;\
virtual void unload() UNLOADFUNCDEF ;\
virtual void describe(OFX::ImageEffectDescriptor &desc); \
virtual void describeInContext(OFX::ImageEffectDescriptor &desc, OFX::ContextEnum context); \
virtual OFX::ImageEffect* createInstance(OfxImageEffectHandle handle, OFX::ContextEnum context); \
};
I won't go through the definition of those actions as the specification does a great job of that (just follow the links above). The 3 parameters of the factory-declaring macro are the name of the factory class, and 2 other arguments whose content depends on whether you want to implement the load and unload actions or not.
In the general case you rarely need to implement those 2 actions, except if you need to do something right after the plug-in is loaded (application launch) or right before it is unloaded (application quit).
If you don't need to implement them, then do as the Invert example does: pass empty braces as the parameters, indicating that the definition of the action is empty.
Otherwise, if you need to implement them, pass a semi-colon ; as the parameter and then define the functions below.
Let's take a look at the createInstance action:
OFX::ImageEffect* InvertPluginFactory::createInstance(OfxImageEffectHandle handle, OFX::ContextEnum /*context*/)
{
return new InvertPlugin(handle);
}
All it does is create an instance of our plug-in class. We will cover the plug-in class later on, once we're done with the factory.
The describe action:
Defines the relevant properties of our plug-in so that the host can use it in an appropriate manner:
void InvertPluginFactory::describe(OFX::ImageEffectDescriptor &desc)
{
// basic labels
desc.setLabels(kPluginName, kPluginName, kPluginName);
desc.setPluginGrouping(kPluginGrouping);
desc.setPluginDescription(kPluginDescription);
// add the supported contexts
desc.addSupportedContext(eContextFilter);
desc.addSupportedContext(eContextGeneral);
desc.addSupportedContext(eContextPaint);
// add supported pixel depths
desc.addSupportedBitDepth(eBitDepthUByte);
desc.addSupportedBitDepth(eBitDepthUShort);
desc.addSupportedBitDepth(eBitDepthFloat);
// set a few flags
desc.setSingleInstance(false);
desc.setHostFrameThreading(false);
desc.setSupportsMultiResolution(true);
desc.setSupportsTiles(true);
desc.setTemporalClipAccess(false);
desc.setRenderTwiceAlways(false);
desc.setSupportsMultipleClipPARs(false);
}
The labels of the plug-in are essentially duplicates of the plug-in name, meant to be shown in different places in the user interface. In our case we use the plug-in name for all 3 labels (label, shortLabel, longLabel).
The plug-in grouping is the name of the group in which this plug-in will be found. If you were to have the grouping "Filter", then in Natron you would find your plug-in under the Filter menu…
You can also specify subgroups, such as: "MyPluginGroup/Filters".
The plug-in description is generally what is seen in the help window the host offers for that plug-in. In Natron this is what the user sees when clicking on the ? button of the plug-in.
Now we must define the contexts we support. You almost always want to support the general context, as it is the context that offers maximum flexibility to your plug-in. The filter context essentially differs from the general context
in that it allows only 1 non-optional input clip and doesn't allow masks.
The paint context is rarely used; it just says that if a mask were to be defined, its name should not be "Mask" but "Brush". Some host applications could then offer a different user interface in the paint context.
In Natron, however, we do not, and the paint context is essentially treated like the other contexts.
As a general rule, try to support as many contexts as you can, this is cheap to do and offers maximum portability across all available host applications.
The supported bit depths of the plug-in are important: they define what kind of images you accept as input and what you can output. If you were, say, to support only byte (8-bit) images, then the host application would have to provide you with 8-bit input and output images. Note that some hosts (like Natron) use 32-bit floating point images internally, so if you can, it is better to support the highest bit depth. If you can't support 32-bit images because the library you're using is 8-bit (like OpenCV), then don't worry: the host application should normally be able to convert images between bit depths on its own. You can check whether the application can do this conversion by checking the content of the global
ImageEffectHostDescription gHostDescription;
This struct offers a lot of information about the host currently invoking your plug-in and can be used to turn on/off special features of your plug-in that only work if the host application supports them.
Fortunately, the Support layer offers means (via the templates of the processor class) to support all bit depths yourself.
The last remaining flags are very important as they define how the render action is called and the type of image and render window that should be used.
I'll just provide links to the original definition of these flags as they are very well described in the official spec:
kOfxImageEffectPluginPropSingleInstance (Defaults to false)
kOfxImageEffectPluginPropHostFrameThreading (Defaults to true)
kOfxImageEffectPropSupportsMultiResolution (Defaults to true)
kOfxImageEffectPropSupportsTiles (Defaults to true)
kOfxImageEffectPropTemporalClipAccess (Defaults to false)
kOfxImageEffectPluginPropFieldRenderTwiceAlways (Defaults to true)
kOfxImageEffectPropSupportsMultipleClipPARs (Defaults to false)
kOfxImageEffectPropSupportsMultipleClipDepths (Defaults to false)
kOfxImageEffectPluginRenderThreadSafety (Defaults to instance safe: any instance can have a single "render" call at any one time.)
The really tricky flags are:
- Multi-resolution: do you support input and output images which have different regions of definition? In this case they can have arbitrary (different) sizes and the origin can be something other than (0,0).
- Tiles: do you support render windows that are different from the full region of definition of an image? If true, then the render window provided as a parameter of the render action can be a rectangle smaller than the
actual region of definition of the image.
- Render thread safety: the default value expects that your plug-in is thread-safe across several instances. In Natron, one instance of your plug-in is a node. Generally this default value is good enough, unless you maintain some dirty global state. The best you can do is declare full thread-safety, in which case several render threads can call the render action simultaneously. The host would then call your render function simultaneously in 2 different cases:
Because you set the host frame threading property to true and you're a fully-safe plug-in. In this case the host will slice up the render window by the number of available threads and launch as many parallel renders as it needs.
Because your instance is referenced several times in the compositing graph and several render threads are ongoing (for example, in Natron this would happen if you had 2 viewers plugged into the same node).
Depending on all the properties that you defined in the describe action, it is YOUR RESPONSIBILITY to check in the render action that the image and arguments provided correspond to all the properties you described.
If the image doesn't have a format you can exploit, then fail the render action by throwing the following exception:
OFX::throwSuiteStatusException(kOfxStatErrImageFormat);
You can also throw a kOfxStatFailed exception if the arguments aren't suited to your plug-in. It is the host's responsibility to provide you with arguments that are good enough for your plug-in, but catching errors ensures that your
plug-in doesn't crash the host application because the host made a mistake.
The describeInContext action:
In this function we define our clips (inputs/output) and the parameters of the plug-in.
void InvertPluginFactory::describeInContext(OFX::ImageEffectDescriptor &desc, OFX::ContextEnum context)
{
// Source clip only in the filter context
// create the mandated source clip
ClipDescriptor *srcClip = desc.defineClip(kOfxImageEffectSimpleSourceClipName);
srcClip->addSupportedComponent(ePixelComponentRGBA);
srcClip->addSupportedComponent(ePixelComponentRGB);
srcClip->addSupportedComponent(ePixelComponentAlpha);
srcClip->setTemporalClipAccess(false);
srcClip->setSupportsTiles(true);
srcClip->setIsMask(false);
// create the mandated output clip
ClipDescriptor *dstClip = desc.defineClip(kOfxImageEffectOutputClipName);
dstClip->addSupportedComponent(ePixelComponentRGBA);
dstClip->addSupportedComponent(ePixelComponentRGB);
dstClip->addSupportedComponent(ePixelComponentAlpha);
dstClip->setSupportsTiles(true);
if (context == eContextGeneral || context == eContextPaint) {
ClipDescriptor *maskClip = context == eContextGeneral ? desc.defineClip("Mask") : desc.defineClip("Brush");
maskClip->addSupportedComponent(ePixelComponentAlpha);
maskClip->setTemporalClipAccess(false);
if (context == eContextGeneral) {
maskClip->setOptional(true);
}
maskClip->setSupportsTiles(true);
maskClip->setIsMask(true);
}
// make some pages to put things in
PageParamDescriptor *page = desc.definePageParam("Controls");
{
OFX::BooleanParamDescriptor* param = desc.defineBooleanParam(kParamProcessR);
param->setLabels(kParamProcessRLabel, kParamProcessRLabel, kParamProcessRLabel);
param->setHint(kParamProcessRHint);
param->setDefault(true);
param->setLayoutHint(eLayoutHintNoNewLine);
page->addChild(*param);
}
{
OFX::BooleanParamDescriptor* param = desc.defineBooleanParam(kParamProcessG);
param->setLabels(kParamProcessGLabel, kParamProcessGLabel, kParamProcessGLabel);
param->setHint(kParamProcessGHint);
param->setDefault(true);
param->setLayoutHint(eLayoutHintNoNewLine);
page->addChild(*param);
}
{
OFX::BooleanParamDescriptor* param = desc.defineBooleanParam( kParamProcessB );
param->setLabels(kParamProcessBLabel, kParamProcessBLabel, kParamProcessBLabel);
param->setHint(kParamProcessBHint);
param->setDefault(true);
param->setLayoutHint(eLayoutHintNoNewLine);
page->addChild(*param);
}
{
OFX::BooleanParamDescriptor* param = desc.defineBooleanParam( kParamProcessA );
param->setLabels(kParamProcessALabel, kParamProcessALabel, kParamProcessALabel);
param->setHint(kParamProcessAHint);
param->setDefault(true);
page->addChild(*param);
}
ofxsPremultDescribeParams(desc, page);
ofxsMaskMixDescribeParams(desc, page);
}
First we define our clips. Your plug-in MUST have an output clip. Then, depending on the context in which the describe action is called, you can define one or more input clips.
In Natron, the input clips are shown from left to right in the reverse order of their declaration. That is, if you define your clips "MyInput1" and "MyInput2" in that order, Natron
would then instantiate the node this way:
MyInput2 MyInput1
\ /
\ /
----------------------------
| |
| MyEffect |
| |
----------------------------
If you're in the general or paint context, your effect can also have a mask clip. Generally the mask is a clip that only supports alpha images. Don't forget to specify whether a clip is optional or not (an optional clip is not mandatory to render) and to define which components your clip supports.
After that you would basically declare all the parameters.
Generally you define a page parameter which will contain some other children parameters. A page parameter in Natron is represented as a tab in the settings panel of the node.
You can also create subgroups of parameters by defining a group parameter.
A parameter should always belong to a page. If you fail to put it into a page, Natron will by default put it in some default tab (generally the "Node" tab).
Each parameter has a label (the visible label on the left-hand side of the parameter in the settings panel) and a script name. It is important to have some sort of standard for naming;
otherwise the user ends up with a poorly aligned user interface and, more importantly, scripting the application gets messier, because it is not easy to reference parameters which have
spaces in their name!
Generally we use the following convention for naming parameters: (this is what can be used in a script)
myParameter1
myLongParameterWithMoreThan1Word
And this convention for naming labels: (this is what is seen in the user interface)
My parameter 1
My long parameter with more than 1 word
We also define macros for all these strings at the beginning of the .cpp file so that it is easier to make quick changes, as sketched below.
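For example (the names are illustrative, following the pattern used by the Invert plug-in):
#define kParamSharpenAmount "sharpenAmount"            // script name
#define kParamSharpenAmountLabel "Sharpen amount"      // label shown in the user interface
#define kParamSharpenAmountHint "How much to sharpen the image." // tooltip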
The hint of a parameter is the string displayed in the tooltip when the user hovers the parameter with the mouse.
You should always set the default value for your parameter.
Almost all parameters animate by default: in Natron they will have the animation button on their right-hand side. If you want to disable animation, you need to explicitly disable it on the descriptor.
The following parameters, on the other hand, do not animate by default: String, Boolean and Choice; you need to explicitly enable animation on them if you want them to animate.
The layout hint is a hint to the application as to whether parameters should be on the same line or not. By default a new parameter starts a new line.
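As a hedged illustration of these last points, here is how such flags could be set on a hypothetical choice parameter in describeInContext (the parameter name, labels and options are made up):
{
    OFX::ChoiceParamDescriptor* param = desc.defineChoiceParam("operation");
    param->setLabels("Operation", "Operation", "Operation");
    param->setHint("Which operation to apply.");
    param->appendOption("Add");
    param->appendOption("Multiply");
    param->setDefault(0);
    param->setAnimates(true);                   // Choice parameters do not animate unless asked to
    param->setLayoutHint(eLayoutHintNoNewLine); // keep the next parameter on the same line
    page->addChild(*param);
}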
We're now ready and set to talk about the plug-in class...
The plug-in
Your class should inherit the OFX::ImageEffect class, which gives you access to a bunch of virtual functions to implement. Those functions represent the actions the host can call on your plug-in.
All actions of the plug-in have a default behaviour except the render action, and generally you don't need to implement all of them.
In the Invert plug-in, we only needed to implement 3 actions:
- The render action
- The isIdentity action
- the changedClip action
The constructor:
InvertPlugin(OfxImageEffectHandle handle)
: ImageEffect(handle)
, dstClip_(0)
, srcClip_(0)
{
dstClip_ = fetchClip(kOfxImageEffectOutputClipName);
assert(dstClip_ && (dstClip_->getPixelComponents() == ePixelComponentRGB || dstClip_->getPixelComponents() == ePixelComponentRGBA || dstClip_->getPixelComponents() == ePixelComponentAlpha));
srcClip_ = fetchClip(kOfxImageEffectSimpleSourceClipName);
assert(srcClip_ && (srcClip_->getPixelComponents() == ePixelComponentRGB || srcClip_->getPixelComponents() == ePixelComponentRGBA || srcClip_->getPixelComponents() == ePixelComponentAlpha));
maskClip_ = getContext() == OFX::eContextFilter ? NULL : fetchClip(getContext() == OFX::eContextPaint ? "Brush" : "Mask");
assert(!maskClip_ || maskClip_->getPixelComponents() == ePixelComponentAlpha);
_paramProcessR = fetchBooleanParam(kParamProcessR);
_paramProcessG = fetchBooleanParam(kParamProcessG);
_paramProcessB = fetchBooleanParam(kParamProcessB);
_paramProcessA = fetchBooleanParam(kParamProcessA);
assert(_paramProcessR && _paramProcessG && _paramProcessB && _paramProcessA);
_premult = fetchBooleanParam(kParamPremult);
_premultChannel = fetchChoiceParam(kParamPremultChannel);
assert(_premult && _premultChannel);
_mix = fetchDoubleParam(kParamMix);
_maskInvert = fetchBooleanParam(kParamMaskInvert);
assert(_mix && _maskInvert);
}
In the constructor, we fetch all clips and parameters that we previously defined in the factory.
At this point, if the plug-in is instantiated from a project saved by a user, all the parameters already have their serialised values restored. If you do not want a parameter to be persistent (i.e. serialised), call setIsPersistent(false) on the parameter descriptor in the describeInContext action.
The isIdentity action:
As a reminder, a plug-in is an identity when it doesn't transform the source image in any way. In our case, if the R, G, B and A parameters are all unchecked, our plug-in doesn't do anything anymore.
So in the isIdentity action we just check whether at least one of the parameters is checked. If not, then we can say we are an identity of the input clip at the same time that was given as a parameter.
bool
InvertPlugin::isIdentity(const IsIdentityArguments &args, Clip * &identityClip, double &/*identityTime*/)
{
bool red, green, blue, alpha;
double mix;
_paramProcessR->getValueAtTime(args.time, red);
_paramProcessG->getValueAtTime(args.time, green);
_paramProcessB->getValueAtTime(args.time, blue);
_paramProcessA->getValueAtTime(args.time, alpha);
_mix->getValueAtTime(args.time, mix);
if (mix == 0. || (!red && !green && !blue && !alpha)) {
identityClip = srcClip_;
return true;
} else {
return false;
}
}
Note here that we use the getValueAtTime() function on the parameters, not the getValue() function. We do this because those parameters animate and we want their exact value at the time given as a parameter of the action.
If you were to call getValue() instead, the host would internally call getValueAtTime() anyway, but with some overhead, because it would also need to fetch the current time at which you called the function.
So as a general rule of thumb:
- Call getValueAtTime for parameters that animate
- Call getValue for all parameters that do not animate
The changedClip action:
void
InvertPlugin::changedClip(const InstanceChangedArgs &args, const std::string &clipName)
{
if (clipName == kOfxImageEffectSimpleSourceClipName && srcClip_ && args.reason == OFX::eChangeUserEdit) {
switch (srcClip_->getPreMultiplication()) {
case eImageOpaque:
break;
case eImagePreMultiplied:
_premult->setValue(true);
break;
case eImageUnPreMultiplied:
_premult->setValue(false);
break;
}
}
}
In the Invert plug-in we have a special parameter that can be checked to unpremultiply the colour channels (RGB) by the Alpha channel before inverting the image.
This action is called when the user changes a connection of the plug-in: when the input arrow is connected to another node, this action handler is called and you can query a bunch of information from the input clip.
In our case we query whether the input clip is a premultiplied image or not and set the value of the Unpremultiply parameter according to the input image state.
The render action:
void
InvertPlugin::render(const OFX::RenderArguments &args)
{
// instantiate the render code based on the pixel depth of the dst clip
OFX::BitDepthEnum dstBitDepth = dstClip_->getPixelDepth();
OFX::PixelComponentEnum dstComponents = dstClip_->getPixelComponents();
// do the rendering
if (dstComponents == OFX::ePixelComponentRGBA) {
switch (dstBitDepth) {
case OFX::eBitDepthUByte : {
ImageInverter<unsigned char, 4, 255> fred(*this);
setupAndProcess(fred, args);
}
break;
case OFX::eBitDepthUShort : {
ImageInverter<unsigned short, 4, 65535> fred(*this);
setupAndProcess(fred, args);
}
break;
case OFX::eBitDepthFloat : {
ImageInverter<float, 4, 1> fred(*this);
setupAndProcess(fred, args);
}
break;
default :
OFX::throwSuiteStatusException(kOfxStatErrUnsupported);
}
} else if (dstComponents == OFX::ePixelComponentRGB) {
switch (dstBitDepth) {
case OFX::eBitDepthUByte : {
ImageInverter<unsigned char, 3, 255> fred(*this);
setupAndProcess(fred, args);
}
break;
case OFX::eBitDepthUShort : {
ImageInverter<unsigned short, 3, 65535> fred(*this);
setupAndProcess(fred, args);
}
break;
case OFX::eBitDepthFloat : {
ImageInverter<float, 3, 1> fred(*this);
setupAndProcess(fred, args);
}
break;
default :
OFX::throwSuiteStatusException(kOfxStatErrUnsupported);
}
} else {
assert(dstComponents == OFX::ePixelComponentAlpha);
switch (dstBitDepth) {
case OFX::eBitDepthUByte : {
ImageInverter<unsigned char, 1, 255> fred(*this);
setupAndProcess(fred, args);
}
break;
case OFX::eBitDepthUShort : {
ImageInverter<unsigned short, 1, 65535> fred(*this);
setupAndProcess(fred, args);
}
break;
case OFX::eBitDepthFloat : {
ImageInverter<float, 1, 1> fred(*this);
setupAndProcess(fred, args);
}
break;
default :
OFX::throwSuiteStatusException(kOfxStatErrUnsupported);
}
}
}
In this function we instantiate the processor that will do the job, with the appropriate template parameters.
We query the bit depth and the pixel components of the output clip and instantiate the processor with the template parameters according to the bit depth and the image components.
We can then call the setupAndProcess function, which continues setting up the render.
void
InvertPlugin::setupAndProcess(InvertBase &processor, const OFX::RenderArguments &args)
{
// get a dst image
std::auto_ptr<OFX::Image> dst(dstClip_->fetchImage(args.time));
if (!dst.get()) {
OFX::throwSuiteStatusException(kOfxStatFailed);
}
OFX::BitDepthEnum dstBitDepth = dst->getPixelDepth();
OFX::PixelComponentEnum dstComponents = dst->getPixelComponents();
// fetch main input image
std::auto_ptr<OFX::Image> src(srcClip_->fetchImage(args.time));
// make sure bit depths are sane
if (src.get()) {
OFX::BitDepthEnum srcBitDepth = src->getPixelDepth();
OFX::PixelComponentEnum srcComponents = src->getPixelComponents();
// see if they have the same depths and bytes and all
if (srcBitDepth != dstBitDepth || srcComponents != dstComponents) {
OFX::throwSuiteStatusException(kOfxStatErrImageFormat);
}
}
// auto ptr for the mask.
std::auto_ptr<OFX::Image> mask((getContext() != OFX::eContextFilter) ? maskClip_->fetchImage(args.time) : 0);
// do we do masking
if (getContext() != OFX::eContextFilter && maskClip_->isConnected()) {
// say we are masking
processor.doMasking(true);
// Set it in the processor
processor.setMaskImg(mask.get());
}
bool red, green, blue, alpha;
_paramProcessR->getValueAtTime(args.time, red);
_paramProcessG->getValueAtTime(args.time, green);
_paramProcessB->getValueAtTime(args.time, blue);
_paramProcessA->getValueAtTime(args.time, alpha);
bool premult;
int premultChannel;
_premult->getValueAtTime(args.time, premult);
_premultChannel->getValueAtTime(args.time, premultChannel);
double mix;
_mix->getValueAtTime(args.time, mix);
bool maskInvert;
_maskInvert->getValueAtTime(args.time, maskInvert);
processor.setValues(red, green, blue, alpha, premult, premultChannel, mix, maskInvert);
// set the images
processor.setDstImg(dst.get());
processor.setSrcImg(src.get());
// set the render window
processor.setRenderWindow(args.renderWindow);
// Call the base class process member, this will call the derived templated process code
processor.process();
}
The first thing we do is fetch the output image into which we will render. If the pointer is NULL then we fail the render, of course; there must be something terribly wrong in the host application for that to happen.
We then again fetch the bit depth and the pixel components of the output image.
Now we fetch the input image too. One thing we didn't do here, but should have, is check whether the clip is actually connected before fetching the image. Some hosts return a garbage image when the clip is disconnected.
In Natron we return a NULL image, but the plug-in should always check whether a non-optional input clip is connected before fetching its image.
If the image returned is NULL then you can do 2 possible things: render black and transparent yourself in the processor (that's what we do in the Invert plug-in) or fail the render, in which case the host will probably render black on its own.
The next thing we do is check that the input and output images' bit depths and components match. Remember that in the describe function we set the kOfxImageEffectPropSupportsMultipleClipPARs
and kOfxImageEffectPropSupportsMultipleClipDepths properties to false, indicating that we expect the input and output images to have the same properties.
Now, depending on the context, we can also fetch our mask. If the mask is NULL because it is not connected, this is not an issue: it is an optional input and your processing code should take into account the fact that this mask might not exist.
We then fetch all the values of the parameters that affect the processing code and pass them to the processor through the setValues(…) function that we created. getValue() and getValueAtTime() can be expensive, so it is better to call them here, once, rather than in the processor class, which is multi-threaded and would duplicate the API calls.
Don't forget to set the src and dst image pointers to the processor as well as the render window. At this point you're ready to call the process() function. This function will ask the host to launch multiple threads to render the code inside the processor class. This is a blocking call and will return only when all threads are finished rendering.
Note that it is FORBIDDEN to set parameter values (i.e. to call setValue() and setValueAtTime()) in the render action. If you want to update some parameters after a render, this is not the place to do so.
Reminder from the spec:
Setting Parameters
Plugins are free to set parameters in limited set of circumstances, typically relating to user interaction. You can only set parameters in the following actions passed to the plug-in's main entry function...
The Create Instance Action
The Begin Instance Changed Action
The Instance Changed Action
The End Instance Changed Action
The Sync Private Data Action
The processor
This class is actually decomposed into 2 classes:
- A base class, which is used by the plug-in in setupAndProcess and avoids having to template everything that takes the processor as an argument.
This base class only holds the setters and getters for parameter values and image pointers.
- A derived class, which is templated on the bit depth (the pixel type here: unsigned char, unsigned short or float), the number of components and the maximum value for that bit depth.
The only relevant function in the processor is the…
multiThreadProcessImages function:
In the Invert plug-in we do a special thing here: we template a new function named process() on the parameters of the plug-in (process red, process green, process blue, process alpha). The reason we do that is that it lets the compiler generate very well
optimised code.
So here the interesting function in the Invert example is actually the process function….
template<bool dored, bool dogreen, bool doblue, bool doalpha>
void process(const OfxRectI& procWindow)
{
float unpPix[4];
float tmpPix[4];
for (int y = procWindow.y1; y < procWindow.y2; y++) {
if (_effect.abort()) {
break;
}
PIX *dstPix = (PIX *) _dstImg->getPixelAddress(procWindow.x1, y);
for (int x = procWindow.x1; x < procWindow.x2; x++) {
const PIX *srcPix = (const PIX *) (_srcImg ? _srcImg->getPixelAddress(x, y) : 0);
// unpremultiply the source pixel (srcPix may be NULL if the input is not connected)
ofxsUnPremult<PIX, nComponents, maxValue>(srcPix, unpPix, _premult, _premultChannel);
tmpPix[0] = dored ? (1. - unpPix[0]) : unpPix[0];
tmpPix[1] = dogreen ? (1. - unpPix[1]) : unpPix[1];
tmpPix[2] = doblue ? (1. - unpPix[2]) : unpPix[2];
tmpPix[3] = doalpha ? (1. - unpPix[3]) : unpPix[3];
ofxsPremultMaskMixPix<PIX, nComponents, maxValue, true>(tmpPix, _premult, _premultChannel, x, y, srcPix, _doMasking, _maskImg, _mix, _maskInvert, dstPix);
// increment the dst pixel
dstPix += nComponents;
}
}
}
This function takes as a parameter the portion of the image that one thread should render (procWindow). Typically, the more threads the end-user's computer has, the smaller the procWindow will be.
This function is called by the host from the threads it launched using the multi-thread suite; that is exactly what the processor.process() call in the setupAndProcess code does.
Make sure that the code in this function multi-threads well and doesn't require much synchronisation overhead between the threads; otherwise you would be better off doing that processing directly in the
render function without launching new threads.
In this example we just loop over the scan-lines first and then over each pixel in a scan-line.
At each loop iteration we check whether we should abort processing. If the abort() function returns true then we must cancel processing; this can be due to user interaction, and generally returning as fast as possible ensures that the user gets responsive interaction and image renders.
We then use ofxsUnPremult and ofxsPremultMaskMixPix to actually do the processing. These are functions we created in the SupportExt repository to help us write processors. We created them because most of the processors across different plug-ins share a good amount of code, and factoring that into common functions is easier to maintain. Plus, improving one of these functions improves all plug-ins at once.
We're done here for the walkthrough of the Invert plug-in.