[ag-automation] application layer use cases for fieldbus stack

Robert Schwebel r.schwebel at pengutronix.de
Tue Jul 10 16:34:47 CEST 2007


Dear Colleagues,

Yesterday we had a brainstorming meeting at PTX and discussed the
requirements for the OSADL fieldbus stack. While we think we are making
progress in understanding how things have to be structured, there are
still a lot of questions left which I'd like to discuss here, especially
with regard to application use cases. Note that this is purely process
data oriented for now - no async operations involved.

Let's further assume we have an "application object" which is a super
simple motion controller, like this one:

+----------------------------------+
I              drive               I 
+----------------------------------+
I +setSpeed(speed:uint32):int      I
I +getPosition(out pos:uint32):int I
+----------------------------------+

The implementation of "drive" wants to be coded against the LDI, which
offers a process variable abstraction; let's assume a super simple one
(it isn't so simple in reality, but for the use cases let's forget that
for now):

+--------------------+
I         pv         I
+--------------------+
I -key               I
I -value             I
+--------------------+
I +getValue(out val) I
I +setValue(in val)  I
+--------------------+
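In C, such a pv abstraction might be sketched like this (purely
illustrative; the names, the int return convention and the fixed
uint32_t value type are my assumptions, not the real LDI API):

```c
#include <stdint.h>

/* Hypothetical sketch of the pv class above; not the real LDI API. */
struct pv {
        const char *key;   /* symbolic name, e.g. "speed" */
        uint32_t value;    /* current process value       */
};

/* Copy the current value out; returns 0 on success. */
static int pv_get(const struct pv *pv, uint32_t *val)
{
        *val = pv->value;
        return 0;
}

/* Store a new value; returns 0 on success. */
static int pv_set(struct pv *pv, uint32_t val)
{
        pv->value = val;
        return 0;
}
```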

Now let's assume that "drive" connects to two fieldbusses, one that
transports pv_speed to the real motor and one that transports the
measured pv_position from the encoder to the controller. So the stack
would put each of the pvs into a separate transportation unit (TPU):
one "out" TPU (speed) and one "in" TPU (position).

So in summary, "drive" has its own "process image", which is not
contiguous in memory but consists of two pv objects spread over two
TPUs. We called these private process images "process variable groups".
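To make the structure concrete, here is a hypothetical C sketch of such
a pv group; all names and fields are assumptions for illustration:

```c
#include <stddef.h>

/* Sketch of a "process variable group": the pvs of one application
 * object, each referencing the TPU that transports it. All names and
 * fields here are assumptions for illustration. */
struct pv;                       /* as in the pv class above         */
struct tpu { int id; };          /* one transportation unit, one bus */

enum pv_dir { PV_OUT, PV_IN };   /* application -> bus / bus -> app  */

struct pv_ref {
        struct pv *pv;           /* the process variable itself      */
        struct tpu *tpu;         /* the TPU that carries this pv     */
        enum pv_dir dir;
};

struct pv_group {
        struct pv_ref *refs;     /* e.g. pv_speed "out", pv_position "in" */
        unsigned int n_refs;
};

/* How many distinct TPUs does a group span? (O(n^2), fine for small n) */
static unsigned int pv_group_tpu_count(const struct pv_group *g)
{
        unsigned int i, j, count = 0;

        for (i = 0; i < g->n_refs; i++) {
                for (j = 0; j < i; j++)
                        if (g->refs[j].tpu == g->refs[i].tpu)
                                break;
                if (j == i)
                        count++;
        }
        return count;
}
```

For the "drive" example, a group with pv_speed on one TPU and
pv_position on another would span two TPUs.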

I'm now wondering which use cases happen inside of application objects.
Here's a list of pseudocode examples which come to my mind:

A) Locking

   while (!endme) {
        pv_group_lock(&group);          /* gain exclusive access   */
        pv_get(&pv_position, &pos);     /* read inputs             */
        /* ... compute ... */
        pv_set(&pv_speed, speed);       /* write outputs           */
        pv_group_unlock(&group);        /* give up access          */
        sleep_until_next_period();      /* wait for the next cycle */
   }
       
   This is the simple PLC case: one thread/process gains access to
   the pv_group, then reads/processes/writes, gives up access and
   waits until its cycle is over. Other processes may also run, even
   with different cycle times.

B) Blocking on Change

   while (!endme) {
	pv_group_lock_on_pv_change(&group, &pv_position, timeout);
	/* we wake up when the TPU containing pv_position comes in */
	...
	pv_group_unlock(&group);

   }

C) Callback on Change

   pv_group_callback_on_group_change(&group, function);

Cases B) and C) are not so simple: when should we wake up? If one of the
included TPUs comes in? On every incoming TPU? How do we do the locking
in a race-free way? Other concurrent processes may do TPU locking in a
different order. But from an application point of view, we'd like to do
the locking on the logical pv group, because that's what the application
wants to know.
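One way to get the "wake up on TPU arrival" race-free is the classic
condition-variable pattern: the receive path bumps a sequence counter
under a mutex, and waiters re-check that counter under the same mutex,
so no wakeup can be lost. A minimal sketch, with all names assumed (the
real API would add timeouts and pv_group semantics):

```c
#include <pthread.h>

/* Hypothetical per-TPU synchronization object. */
struct tpu_sync {
        pthread_mutex_t lock;
        pthread_cond_t arrived;
        unsigned long seq;       /* incremented on every received TPU */
};

/* Called from the fieldbus receive path after the TPU data is stored. */
static void tpu_mark_arrived(struct tpu_sync *s)
{
        pthread_mutex_lock(&s->lock);
        s->seq++;
        pthread_cond_broadcast(&s->arrived);
        pthread_mutex_unlock(&s->lock);
}

/* Returns with s->lock held as soon as a TPU newer than *last came in;
 * the caller reads its pvs and then unlocks. The predicate is checked
 * under the same lock the receiver takes, so the wait is race-free. */
static void tpu_wait_change(struct tpu_sync *s, unsigned long *last)
{
        pthread_mutex_lock(&s->lock);
        while (s->seq == *last)
                pthread_cond_wait(&s->arrived, &s->lock);
        *last = s->seq;
        /* caller: process pvs, then pthread_mutex_unlock(&s->lock) */
}
```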

In past implementations of our libpv we have done a very simple "per
process image" locking: when entering the critical section, we've locked
the whole image. That's not very performant, but simple. Now, thinking
all this over, we came to the conclusion that we also want to lock only
the part of the process space which really needs locking.
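Locking only the TPUs a group actually spans can be made deadlock-free
even when concurrent groups overlap, by always taking the per-TPU locks
in one global order, e.g. by address. A sketch under those assumptions
(names invented for illustration):

```c
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical TPU with its own lock. */
struct tpu {
        pthread_mutex_t lock;
};

/* Order TPUs by address, giving one global locking order. */
static int cmp_tpu(const void *a, const void *b)
{
        uintptr_t x = (uintptr_t)*(struct tpu *const *)a;
        uintptr_t y = (uintptr_t)*(struct tpu *const *)b;
        return (x > y) - (x < y);
}

/* tpus[] are the distinct TPUs of one pv_group; locking them in
 * ascending address order means two groups sharing TPUs can never
 * deadlock, whatever order their pvs were declared in. */
static void pv_group_lock_tpus(struct tpu **tpus, unsigned int n)
{
        unsigned int i;

        qsort(tpus, n, sizeof(*tpus), cmp_tpu);
        for (i = 0; i < n; i++)
                pthread_mutex_lock(&tpus[i]->lock);
}

/* Unlock in reverse order. */
static void pv_group_unlock_tpus(struct tpu **tpus, unsigned int n)
{
        while (n--)
                pthread_mutex_unlock(&tpus[n]->lock);
}
```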

And, there's another use case: streaming. For example, we have an
application which reads data from a local measurement system via
CANopen, collects the data, sends it over an EtherCAT line to a second
box and converts it to a second local CANopen bus. In such a situation
it is necessary to put all TPUs into a buffer queue and never lose one
of them, whereas the realtime requirements are more or less relaxed, as
long as one gets all TPUs. The same scenario exists for ultrasound
measurement: a data logger gets data sets (TPUs) out of an FPGA, queues
them in a buffer and pushes them onto a hard disk. That's not a
different abstraction, and I'd like to solve it with the same data
structures.
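For the streaming case, the central data structure could be a
fixed-size ring buffer that refuses to overwrite when full, so a TPU
can never be lost silently. A minimal sketch with assumed names and
sizes:

```c
#include <stdint.h>
#include <string.h>

#define TPU_SIZE  64            /* bytes per TPU payload, assumed      */
#define QUEUE_LEN 128           /* must hold the worst-case burst      */

/* Lossless TPU queue: producer fails rather than overwrites on full. */
struct tpu_queue {
        uint8_t buf[QUEUE_LEN][TPU_SIZE];
        unsigned int head, tail;   /* head: next write, tail: next read */
        unsigned int count;
};

/* Returns 0 on success, -1 if full -- the caller must block or report
 * an overrun instead of dropping the TPU. */
static int tpu_queue_push(struct tpu_queue *q, const uint8_t *tpu)
{
        if (q->count == QUEUE_LEN)
                return -1;
        memcpy(q->buf[q->head], tpu, TPU_SIZE);
        q->head = (q->head + 1) % QUEUE_LEN;
        q->count++;
        return 0;
}

/* Returns 0 on success, -1 if the queue is empty. */
static int tpu_queue_pop(struct tpu_queue *q, uint8_t *tpu)
{
        if (q->count == 0)
                return -1;
        memcpy(tpu, q->buf[q->tail], TPU_SIZE);
        q->tail = (q->tail + 1) % QUEUE_LEN;
        q->count--;
        return 0;
}
```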

Now my questions:

- Are the scenarios A-C above realistic for your applications?

- Especially with regard to the "on change" use cases - what are the
  requirements in YOUR application?

- On which "changes" would you like to be woken up? Per pv? Per TPU? Per
  pv_group? Why?

- Can you explain further use cases?

- For variables which are of "out" type (application -> fieldbus): are
  there multiple threads which could want to write to a pv? If yes, is
  there a need for inter-pv locks, for example because a set of PID
  controller parameters may only be changed atomically?

Thanks for your feedback.

Regards,
Robert Schwebel
-- 
 Dipl.-Ing. Robert Schwebel | http://www.pengutronix.de
 Pengutronix - Linux Solutions for Science and Industry
   Handelsregister:  Amtsgericht Hildesheim, HRA 2686
     Hannoversche Str. 2, 31134 Hildesheim, Germany
   Phone: +49-5121-206917-0 |  Fax: +49-5121-206917-9


