Based on the building blocks developed thus far, it was possible to assemble a typical media processing chain:

* two source nodes
* one of these passes its data through a filter
* a mixer node on top to combine both chains
* time-based automation for processing parameters

As actual computation, hash-chaining on blocks of reproducible random data was used, making it possible to verify, for every data word, that the expected computations were carried out, in the expected order.
34 lines
538 B
Text
TESTING "Component Test Suite: Render Engine parts" ./test-suite --group=node

TEST "Proc Node basics" NodeBase_test <<END
END

TEST "Proc Node creation" NodeBuilder_test <<END
END

TEST "Proc Node test setup" NodeDevel_test <<END
END

TEST "Proc Node data feeds" NodeFeed_test <<END
END

TEST "Proc Node connectivity" NodeLink_test <<END
END

TEST "Proc Node metadata key" NodeMeta_test <<END
END

PLANNED "Proc Node operation modes" NodeOpera_test <<END
END

PLANNED "Proc Node engine storage setup" NodeStorage_test <<END
END