This chapter briefly describes how to build the kernel and the supplied examples, and how to run those examples, using a VxWorks 6.x kernel and the Workbench front end. For more information about VxWorks 6.x, please refer to Wind River's documentation.
The VxWorks kernel required to support Vortex OpenSplice on VxWorks 6.x is built using the development kernel configuration profile with the additional POSIX thread components enabled. A kernel meeting this requirement can be built within Workbench by starting the Workbench GUI and choosing File > New > VxWorks Image Project.
Type a name for the project then select the appropriate Board Support Package and Tool Chain (for example pcPentium4 and gnu).
Leave all of the kernel options blank except for the SMP option, which must match the Vortex OpenSplice build (SMP or non-SMP) you are working with.
On the Configuration Profile dialog choose PROFILE_DEVELOPMENT from the drop-down list.
Once the kernel configuration project has been generated, the additional required functionality can be enabled:
To successfully complete the C++ examples you will also require the kernel's C++ support components to be enabled.
Note that the Workbench GUI should be used to enable these components so that dependent components are automatically added to the project.
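As a rough illustration, the POSIX-related components involved typically include names such as the following (the exact component set required varies between releases; confirm against the Vortex OpenSplice release notes for your version):

```
INCLUDE_POSIX_PTHREADS    /* POSIX threads */
INCLUDE_POSIX_SEM         /* POSIX semaphores */
INCLUDE_POSIX_MQ          /* POSIX message queues */
INCLUDE_POSIX_CLOCKS      /* POSIX clocks */
INCLUDE_POSIX_TIMERS      /* POSIX timers */
```

Enabling these through the Workbench Kernel Configuration editor, rather than by editing configuration files by hand, ensures the dependent components mentioned above are pulled in automatically.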
ADLINK provides the pingpong example in both C and C++, as described in the Examples section. These examples are provided as Workbench projects which can easily be built and then deployed on to the target hardware using a process similar to that described above.
Each project contains a README file briefly explaining the example and the parameters required to run it.
The example projects can be imported into Workbench by choosing File > Import... > General > Existing Projects into Workspace.
In the Import Projects dialog, browse to the examples directory of the OpenSplice installation. Select the required projects for importing from the list that Workbench has detected.
Ensure that the Copy projects into workspace box is un-checked.
Projects in a workspace can be built individually or as a group.
Scenarios for building the OpenSplice examples
There are two included scenarios for building and deploying the OpenSplice examples.
C example, two targets:

Step 1
Right-click on wb_sac_pingpong_kernel and then choose Rebuild Project.
Step 2
Next configure the targets to use the target server filesystem, mapped on the target as /tgtsvr.
Step 3
Copy the newly-built wb_sac_pingpong_kernel/PENTIUM4gnu/sac_pingpong_kernel/Debug/sac_pingpong_kernel.out to the target server for each board as sac_pingpong_kernel.out.
Step 4
Open a target shell connection to each board and in the C mode shell run:
ld 1,0,"/tgtsvr/sac_pingpong_kernel.out"
ospl_spliced
Step 5
Open another target shell connection to one board and run:
pong "PongRead PongWrite"
Step 6
Open another target shell on the other board and run:
ping "100 100 m PongRead PongWrite"
ping "100 100 q PongRead PongWrite"
ping "100 100 s PongRead PongWrite"
ping "100 100 b PongRead PongWrite"
ping "100 100 f PongRead PongWrite"
ping "1 10 t PongRead PongWrite"
C++ example, two targets:

Step 1
Right-click on wb_sacpp_pingpong_kernel and then choose Rebuild Project.
Step 2
Next configure the targets to use the target server filesystem, mapped on the target as /tgtsvr.
Step 3
Copy the newly-built wb_sacpp_pingpong_kernel/PENTIUM4gnu/sacpp_pingpong_kernel/Debug/sacpp_pingpong_kernel.out to the target server for each board as sacpp_pingpong_kernel.out.
Step 4
Open a target shell connection to each board and in the C mode shell run:
ld 1,0,"/tgtsvr/sacpp_pingpong_kernel.out"
ospl_spliced
Step 5
Open another target shell connection to one board and run:
pong "PongRead PongWrite"
Step 6
Open another target shell on the other board and run:
ping "100 100 m PongRead PongWrite"
ping "100 100 q PongRead PongWrite"
ping "100 100 s PongRead PongWrite"
ping "100 100 b PongRead PongWrite"
ping "100 100 f PongRead PongWrite"
ping "1 10 t PongRead PongWrite"
C example, single target:

Step 1
Right-click on wb_sac_pingpong_kernel and then choose Rebuild Project.
Step 2
Next configure the target to use the target server filesystem, mapped on the target as /tgtsvr.
Step 3
Copy the newly-built wb_sac_pingpong_kernel/PENTIUM4gnu/sac_pingpong_kernel/Debug/sac_pingpong_kernel.out to the target server as sac_pingpong_kernel.out.
Step 4
Open a target shell connection and in the C mode shell run:
ld 1,0,"/tgtsvr/sac_pingpong_kernel.out"
ospl_spliced
Step 5
Open another target shell connection and run:
pong "PongRead PongWrite"
Step 6
Open another target shell and run:
ping "100 100 m PongRead PongWrite"
ping "100 100 q PongRead PongWrite"
ping "100 100 s PongRead PongWrite"
ping "100 100 b PongRead PongWrite"
ping "100 100 f PongRead PongWrite"
ping "1 10 t PongRead PongWrite"
C++ example, single target:

Step 1
Right-click on wb_sacpp_pingpong_kernel and then choose Rebuild Project.
Step 2
Next configure the target to use the target server filesystem, mapped on the target as /tgtsvr.
Step 3
Copy the newly-built wb_sacpp_pingpong_kernel/PENTIUM4gnu/sacpp_pingpong_kernel/Debug/sacpp_pingpong_kernel.out to the target server as sacpp_pingpong_kernel.out.
Step 4
Open a target shell connection and in the C mode shell run:
ld 1,0,"/tgtsvr/sacpp_pingpong_kernel.out"
ospl_spliced
Step 5
Open another target shell connection and run:
pong "PongRead PongWrite"
Step 6
Open another target shell and run:
ping "100 100 m PongRead PongWrite"
ping "100 100 q PongRead PongWrite"
ping "100 100 s PongRead PongWrite"
ping "100 100 b PongRead PongWrite"
ping "100 100 f PongRead PongWrite"
ping "1 10 t PongRead PongWrite"
The example builds by linking the object produced by compiling the output of osplconf2c, along with the example application, the splice daemon, and the services enabled in the configuration XML, into one single downloadable kernel module. Users producing their own applications could of course decide to link the object and library files into a monolithic kernel image instead.
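As a sketch, that build flow for a user application might look like the following (tool names assume the Wind River GNU toolchain for Pentium targets; the application file names are hypothetical, and the full list of objects and libraries to link depends on the services enabled):

```
# Generate a configuration source file from the OpenSplice XML
osplconf2c -u file:///tgtsvr/ospl.xml -o ospl_config.c

# Compile the generated configuration and the application code
ccpentium -c ospl_config.c -o ospl_config.o
ccpentium -c my_app.c -o my_app.o

# Partially link (-r) the objects into a single downloadable kernel module
ldpentium -r -o my_app_kernel.out my_app.o ospl_config.o
```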
NOTE: for VxWorks kernel mode builds of OpenSplice, the single process feature of the OpenSplice domain must not be enabled, i.e. "<SingleProcess>true</SingleProcess>" must not be included in the OpenSplice configuration XML. On VxWorks kernel builds the model is always that an area of kernel memory is allocated to store the domain database (the size of which is controlled by the Size option in the Database configuration for OpenSplice, as is used on other platforms for the shared memory model). This database can then be accessed by any task on the same VxWorks node.
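For reference, the database size is set in the Domain section of the OpenSplice configuration XML; a minimal sketch (domain name and size value are illustrative only):

```xml
<OpenSplice>
  <Domain>
    <Name>ospl_example</Name>
    <Database>
      <!-- size of the kernel-memory domain database, in bytes -->
      <Size>10485760</Size>
    </Database>
    <!-- <SingleProcess>true</SingleProcess> must NOT appear in kernel-mode builds -->
  </Domain>
</OpenSplice>
```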
Step 1
Right-click on wb_sac_pingpong_kernel_app_only for the C example or wb_sacpp_pingpong_kernel_app_only for C++, then choose Rebuild Project.
Step 2
Next configure the targets to use the target server filesystem, mapped on the target as /tgtsvr (use different host directories for each target).
Step 3
Copy the ospl.xml file from the distribution to the target server directories, and adjust for your desired configuration.
Step 4
Copy all the services from the bin directory in the distribution to the target server directories (spliced.out, networking.out, etc.).
To run the examples on two targets, start the OpenSplice daemons on each target.
Step 5
Open a Host Shell (windsh) connection to each board, and in the C mode shell enter:
cd "<path to opensplice distribution>"
ld 1,0,"lib/libddscore.out"
ld 1,0,"bin/ospl.out"
os_putenv("OSPL_URI=file:///tgtsvr/ospl.xml")
os_putenv("OSPL_LOGPATH=/tgtsvr")
os_putenv("PATH=/tgtsvr/")
ospl("start")
Please note that in order to deploy the cmsoap service for use with the OpenSplice DDS Tuner, it must be configured in ospl.xml and the libraries named libcmxml.out and libddsrrstorage.out must be pre-loaded:
cd "<path to opensplice distribution>"
ld 1,0,"lib/libddscore.out"
ld 1,0,"lib/libddsrrstorage.out"
ld 1,0,"lib/libcmxml.out"
ld 1,0,"bin/ospl.out"
os_putenv("OSPL_URI=file:///tgtsvr/ospl.xml")
os_putenv("OSPL_LOGPATH=/tgtsvr")
os_putenv("PATH=/tgtsvr/")
ospl("start")
Step 6
To load and run the examples:
ld 1,0,"lib/libdcpsgapi.out"
ld 1,0,"lib/libdcpssac.out"
cd "examples/dcps/PingPong/c/standalone"
ld 1,0,"sac_pingpong_kernel_app_only.out"
Step 7
Open a new Host Shell connection to one board and run:
pong "PongRead PongWrite"
Step 8
Open another new Host Shell on the other board and run:
ping "100 100 m PongRead PongWrite"
ping "100 100 q PongRead PongWrite"
ping "100 100 s PongRead PongWrite"
ping "100 100 b PongRead PongWrite"
ping "100 100 f PongRead PongWrite"
ping "1 10 t PongRead PongWrite"
Proceed as described in the section above, but make all windsh connections to one board, and only load and run ospl once.
Loading spliced and its services may take some time if done exactly as described above. This is because the service DKMs (Downloadable Kernel Modules) and entry points are dynamically loaded as required by OpenSplice.
On startup, OpenSplice will attempt to locate the entry point symbols for the services and invoke them directly. If the DKMs providing those symbols have been pre-loaded, this removes the need for dynamic loading and can give a quicker deployment; otherwise, OpenSplice will dynamically load the service DKMs.
For example, for an OpenSplice system that will deploy spliced with the networking and durability services, the following commands could be used:
cd "<path to opensplice distribution>"
ld 1,0,"lib/libddscore.out"
ld 1,0,"bin/ospl.out"
ld 1,0,"bin/spliced.out"
ld 1,0,"bin/networking.out"
ld 1,0,"bin/durability.out"
os_putenv("OSPL_URI=file:///tgtsvr/ospl.xml")
os_putenv("PATH=/tgtsvr/bin")
os_putenv("OSPL_LOGPATH=/tgtsvr")
ospl("start")
The ospl-info.log file records whether entry point symbols were pre-loaded, or a DKM has been loaded.
In this scenario osplconf2c has been used with the -x and -d options to create an empty configuration which allows dynamic loading. The resulting object has been included in the supplied libddsos.out. If desired, the end user could create a new libddsos.out based on libddsos.a and a file generated by osplconf2c without the -x option, in order to statically link some services while still allowing dynamic loading of others if the built-in XML is later overridden using a file URI. (See Overriding OpenSplice configuration at runtime.)
osplconf2c is required for example and user applications. It is a tool which processes the OpenSplice configuration XML and produces a source file to be compiled and linked into the final image. The generated file contains the data from the XML file, any environment variables that you require to configure OpenSplice, and references to the symbols for the entry points of the OpenSplice services.
Environment variables can be added using the -e option. For example, you would use the -e "OSPL_LOGPATH=/xxx/yyy" option if you wanted the logs to be placed in /xxx/yyy.
osplconf2c is run automatically by the example projects.
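Putting these options together, a manual invocation might look like this (the URI and log path are illustrative, matching the /tgtsvr mapping used in the examples above):

```
osplconf2c -u file:///tgtsvr/ospl.xml -e "OSPL_LOGPATH=/tgtsvr" -o ospl_config.c
```

The generated ospl_config.c is then compiled and linked into the application image as described earlier.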
You can override the OpenSplice configuration XML provided to osplconf2c at runtime by specifying the URI of a file when starting ospl_spliced on the target; for example: ospl_spliced "file:///tgtsvr/ospl.xml"
Usage
osplconf2c -h
osplconf2c [-u <URI>] [-e <env=var> ]... [-o <file>]
Options