[ag-automation] application layer use cases for fieldbus stack

Dieter Hess d.hess at 3s-software.com
Tue Jul 10 18:01:03 CEST 2007


Hello,
Of course there is an implementation that always gets a read lock on a TPU (it just needs memory). The whole point is that the TPU stays unchanged while it is being read. To avoid write locks you must use FIFOs, but this makes no sense for automation applications, where I/O and processing are synchronized. Only one process/thread should write to an output TPU. 
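
One possible shape of such an implementation, sketched below with invented names (tpu, tpu_publish, tpu_snapshot): every reader works on a private copy of the TPU ("it just needs memory"), taken under a very short critical section, while only the single owning thread ever writes the TPU.

#include <pthread.h>
#include <stdint.h>
#include <string.h>

#define TPU_SIZE 64

struct tpu {
    pthread_mutex_t lock;    /* init with PTHREAD_MUTEX_INITIALIZER; held only for a memcpy */
    uint8_t data[TPU_SIZE];  /* raw process data of this transportation unit */
};

/* The single writer: the bus thread for an "in" TPU, or the one
 * application thread that owns an "out" TPU. */
static void tpu_publish(struct tpu *t, const void *src, size_t len)
{
    pthread_mutex_lock(&t->lock);
    memcpy(t->data, src, len);
    pthread_mutex_unlock(&t->lock);
}

/* Any reader: the "read lock" is available after a short, bounded wait,
 * and the private copy cannot change while the application processes it. */
static void tpu_snapshot(struct tpu *t, void *dst, size_t len)
{
    pthread_mutex_lock(&t->lock);
    memcpy(dst, t->data, len);
    pthread_mutex_unlock(&t->lock);
}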

Regards

Dieter Hess 

---------------------------------------------------- 
We software Automation. 

3S-Smart Software Solutions GmbH 
Dieter Hess 
Managing Director 
Memminger Str. 151, DE-87439 Kempten 
Phone +49-831-54031-0, Fax +49-831-54031-50

Email: d.hess at 3s-software.com 
Web: http://www.3s-software.com 

Visit the CoDeSys Internet forum at http://forum.3s-software.com/

3S-Smart Software Solutions GmbH 
Managing Directors: Dipl.-Inf. Dieter Hess, Dipl.-Inf. Manfred Werner  
Commercial register: Kempten HRB 6186 
VAT ID no. DE 167014915

-----Original Message-----
From: ag-automation-bounces at www.osadl.org [mailto:ag-automation-bounces at www.osadl.org] On behalf of Peter Soetens
Sent: Tuesday, 10 July 2007 17:11
To: ag-automation at www.osadl.org
Subject: Re: [ag-automation] application layer use cases for fieldbus stack

Quoting Robert Schwebel <r.schwebel at pengutronix.de>:

> Dear Colleagues,
>
> Yesterday we had a brainstorming meeting at PTX and discussed the 
> requirements for the OSADL fieldbus stack. While we think we are 
> making progress in understanding how things have to be structured 
> there are still a lot of questions left which I'd like to discuss 
> here, especially with regard to application use cases. Note that this 
> is purely process data oriented for now - no async operations involved.
>
> Let's further assume we have an "application object" which is a super 
> simple motion controller, like this one:
>
> +----------------------------------+
> |              drive               |
> +----------------------------------+
> | +setSpeed(speed:uint32):int      |
> | +getPosition(out pos:uint32):int |
> +----------------------------------+
>
> The implementation of "drive" wants to be coded against the LDI, which 
> offers a process variable abstraction; let's assume a super simple one 
> (it isn't so simple in reality, but for the use cases let's forget 
> that for now):
>
> +--------------------+
> |         pv         |
> +--------------------+
> | -key               |
> | -value             |
> +--------------------+
> | +getValue(out val) |
> | +setValue(in val)  |
> +--------------------+
>
> Now let's assume that "drive" connects to two fieldbusses, one that 
> transports pv_speed to the real motor and one that transports the 
> measured pv_position from the encoder to the controller. So the stack 
> would put each of the pvs into a separate transportation unit (tpu): 
> one "out" tpu (speed) and one "in" tpu (position).
>
> So in summary, "drive" has its personal "process image", which is not 
> contiguous in memory but consists of two pv objects spread over two 
> TPUs. We called these private process images "process variable groups".
>
> I'm now wondering which use cases happen inside of application objects.
> Here's a list of pseudocode examples which come to my mind:
>
> A) Locking
>
>   while (!endme) {
> 	pv_group_lock(&group);
> 	pv_get(&pv, &somewhere);
> 	pv_set(&pv, value);
> 	...
> 	pv_group_unlock(&group);
> 	sleep_until_next_period();
>   }
>
>   This is the simple PLC case, one thread/process, gaining access to
>   the pv_group, then reading/processing/writing, giving up access and
>   waiting until its cycle is over. Other processes may also run, even
>   with different cycle times.

Why would you lock the group across busses? I would expect individual locks for each bus, as the messages from different busses come in asynchronously (i.e. arbitrarily in time) anyway. The only thing you want to prevent is that variable 'v' is overwritten by an incoming message from v's bus (or by another thread) while you are working with it.
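
A sketch of what such per-bus locking could look like (struct and function names are invented for illustration): each bus protects only its own TPU image, and a read holds that one lock just long enough to copy the variable out.

#include <pthread.h>
#include <stdint.h>

struct bus {
    pthread_mutex_t lock;   /* protects only this bus's TPU image */
    uint32_t image[16];     /* simplified process image of this bus */
};

struct pv_ref {
    struct bus *bus;        /* which bus the variable lives on */
    unsigned offset;        /* where it sits in that bus's image */
};

/* Read one variable consistently: only the owning bus is locked, and only
 * for the duration of the copy, so an incoming message on this bus cannot
 * overwrite the value half-way through the read, and the two busses never
 * contend with each other. */
static uint32_t pv_read(const struct pv_ref *pv)
{
    pthread_mutex_lock(&pv->bus->lock);
    uint32_t value = pv->bus->image[pv->offset];
    pthread_mutex_unlock(&pv->bus->lock);
    return value;
}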

>
> B) Blocking on Change
>
>   while (!endme) {
> 	pv_group_lock_on_pv_change(&group, &pv_position, timeout);
> 	/* we wake up when the TPU containing pv_position comes in */
> 	...
> 	pv_group_unlock(&group);
>
>   }
>
> C) Callback on Change
>
>   pv_group_callback_on_group_change(&group, function);
>
> Cases B) and C) are not so simple: when should we wake up? If one of 
> the included TPUs comes in? On every incoming TPU? How do we do the 
> locking in a race free way? Other concurrent processes may do TPU 
> locking in a different order. But from an application point of view, 
> we'd like to do the locking on the logical pv group, because that's 
> what the application wants to know.
>
> In past implementations of our libpv we have done a very simple "per 
> process image" locking: when entering the critical section, we've 
> locked the whole image. That's not very performant, but simple. Now, 
> thinking all this over, we came to the conclusion that we also want to 
> lock only the part of the process space which really needs locking.

I'd say, whatever you do, don't use locks. Your old approach (one big lock) always worked but is not very performant, and that's about as far as you can safely get with locks: adding more fine-grained locks will improve performance but will also lead to deadlocks or priority inversions sooner or later.
We're using these approaches to avoid using locks:

1. Use callbacks to read the bus
This allows data to be read from incoming TPUs in a thread-safe way, as long as all the callbacks are serialised. This won't work in use cases A and B, though.

2. Use FIFOs to write the bus
When you pv_set() new data, the data is pushed into a FIFO, which serialises all access; the FIFO is emptied by the bus thread (a sketch follows below). This works in all three cases.

3. Use lock-free algorithms to share process images (= pv groups) among threads (also sketched below). This works in all three cases, but you need to read(-modify-write) the process image as a whole.

The disadvantage of points 2 and 3 is more memory usage.
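
A minimal sketch of what the FIFO of point 2 could look like, assuming one single-producer/single-consumer ring per writing thread (so neither side needs a lock); pv_fifo, pv_fifo_push and pv_fifo_pop are invented names, not an existing API.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define FIFO_SLOTS 64u  /* must be a power of two */

struct pv_update {
    uint32_t key;    /* which process variable */
    uint32_t value;  /* new value to transmit */
};

struct pv_fifo {
    struct pv_update slots[FIFO_SLOTS];
    atomic_uint head;  /* only advanced by the producer (application thread) */
    atomic_uint tail;  /* only advanced by the consumer (bus thread) */
};

/* Application side: roughly what a pv_set() could do internally. */
static bool pv_fifo_push(struct pv_fifo *f, struct pv_update u)
{
    unsigned head = atomic_load_explicit(&f->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&f->tail, memory_order_acquire);
    if (head - tail == FIFO_SLOTS)
        return false;                          /* full: caller decides what to do */
    f->slots[head % FIFO_SLOTS] = u;
    atomic_store_explicit(&f->head, head + 1, memory_order_release);
    return true;
}

/* Bus thread: drain the FIFO into the outgoing TPU once per cycle. */
static bool pv_fifo_pop(struct pv_fifo *f, struct pv_update *out)
{
    unsigned tail = atomic_load_explicit(&f->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&f->head, memory_order_acquire);
    if (tail == head)
        return false;                          /* empty */
    *out = f->slots[tail % FIFO_SLOTS];
    atomic_store_explicit(&f->tail, tail + 1, memory_order_release);
    return true;
}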

Now if you start locking on the pv group, I believe that you're very close to point 3 as a usage pattern.
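
And a sketch of what point 3 could look like for one writer and one reader, using a classic triple buffer so that a whole, consistent process image is exchanged and neither side ever blocks; pv_image, image_xchg, image_publish and image_fetch are invented names for illustration.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define IMAGE_SIZE 256

struct pv_image { uint8_t data[IMAGE_SIZE]; };  /* one whole pv group */

struct image_xchg {
    struct pv_image buf[3];
    atomic_uint mid;    /* index of the "latest" buffer, plus a fresh flag */
    unsigned back;      /* buffer currently owned by the writer */
    unsigned front;     /* buffer currently owned by the reader */
};

#define FRESH 0x4u
#define IDX(x) ((x) & 0x3u)

static void image_xchg_init(struct image_xchg *x)
{
    atomic_init(&x->mid, 1);  /* buffer 1 parked in the middle, not fresh */
    x->back = 0;              /* writer starts with buffer 0 */
    x->front = 2;             /* reader starts with buffer 2 */
}

/* Writer: build the new image in a private buffer, then publish it by
 * swapping it with the middle slot. */
static void image_publish(struct image_xchg *x, const struct pv_image *src)
{
    x->buf[x->back] = *src;
    unsigned old = atomic_exchange_explicit(&x->mid, x->back | FRESH,
                                            memory_order_acq_rel);
    x->back = IDX(old);
}

/* Reader: take the latest whole image, or keep the previous one if nothing
 * new was published.  The copy can be modified and written back as a whole
 * through a second image_xchg running in the opposite direction. */
static bool image_fetch(struct image_xchg *x, struct pv_image *dst)
{
    if (!(atomic_load_explicit(&x->mid, memory_order_relaxed) & FRESH))
        return false;
    unsigned old = atomic_exchange_explicit(&x->mid, x->front,
                                            memory_order_acq_rel);
    x->front = IDX(old);
    *dst = x->buf[x->front];
    return true;
}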

>
> And, there's another use case: streaming. For example, we have an 
> application which reads data from a local measurement system via 
> CANopen, collects the data, sends it over an EtherCAT line to a second 
> box and converts it to a second local CANopen bus. In such a situation 
> it is necessary to put all TPUs into a buffer queue and never lose one 
> of them, whereas the realtime requirements are more or less relaxed, as 
> long as one gets all TPUs. The same scenario exists for ultrasound
> measurement: a data logger gets data sets (TPUs) out of an FPGA, 
> queues them in a buffer and pushes them onto a hard disk. That's not a 
> different abstraction, and I'd like to solve it with the same data 
> structures.
>
> Now my questions:
>
> - Are the scenarios A-C above realistic for your applications?

They are realistic, but not recommended. In Orocos, we use DataObjects to share process images among threads (case A) and Events (similar to
boost::signals) to serialise updates with a (periodic) thread (cases B and C).

>
> - Especially with regard to the "on change" use cases - how are the  
> requirements in YOUR application?

We want to record the change without using locks in periodic and reactive threads. The change is always pushed asynchronously to the client. A mechanism is set up such that a thread checks its 'event queue' and processes the events (i.e. makes a copy of the event data) before or after its calculations. A thread has the choice to block on the event queue or to return immediately when it is empty.
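
A sketch of how such a per-thread event queue could look and be drained from a periodic thread, assuming a lock-free single-producer/single-consumer ring plus a POSIX semaphore for the blocking variant; pv_event, event_queue, evq_push and evq_pop are invented names, not the Orocos implementation.

#include <semaphore.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define EVQ_SLOTS 128u  /* must be a power of two */

struct pv_event { uint32_t key; uint32_t value; };  /* copied event data */

struct event_queue {
    struct pv_event slots[EVQ_SLOTS];
    atomic_uint head, tail;  /* single producer, single consumer */
    sem_t pending;           /* init with sem_init(); counts undelivered events */
};

/* Producer (e.g. the bus thread): pushes the change, never blocks. */
static bool evq_push(struct event_queue *q, struct pv_event ev)
{
    unsigned head = atomic_load_explicit(&q->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (head - tail == EVQ_SLOTS)
        return false;                    /* overrun: count or report it */
    q->slots[head % EVQ_SLOTS] = ev;
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    sem_post(&q->pending);
    return true;
}

/* Consumer: blocking = false gives the "return immediately when empty"
 * behaviour, blocking = true makes a purely reactive thread sleep here. */
static bool evq_pop(struct event_queue *q, struct pv_event *ev, bool blocking)
{
    if ((blocking ? sem_wait(&q->pending) : sem_trywait(&q->pending)) != 0)
        return false;                    /* empty (or interrupted) */
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    *ev = q->slots[tail % EVQ_SLOTS];
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;
}

static atomic_bool endme;  /* set by whoever shuts the thread down */

/* Periodic thread: copy all pending changes into local data before the
 * cyclic calculation, without blocking. */
static void control_loop(struct event_queue *q)
{
    struct pv_event ev;

    while (!atomic_load(&endme)) {
        while (evq_pop(q, &ev, false))
            ;  /* copy ev.key / ev.value into the thread's local image */
        /* ... run the calculation on the local copies ... */
        /* ... sleep until the next period ... */
    }
}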

>
> - On which "changes" would you like to be woken up? Per pv? Per TPU? 
> Per  pv_group? Why?

I believe most will answer per pv, but with the ability to track multiple pvs simultaneously.

>
> - Can you explain further use cases?
>
> - For variables which are of "out" type (application -> fieldbus): are  
> there multiple threads which could want to write to a pv? If yes, is  
> there a need for inter-pv-locks, for example because a PID set of  
> controller parameters is only to be changed atomically?

This is an example of 'read the process image, modify it, and write it back as a whole'.

I understand your use cases from a 'low level API' point of view, but I would not recommend them. If you really want these cases anyway, I would 'recommend' one big lock and take the performance hit that comes with it. 
That's the only way you are guaranteed to stay on the safe side.

Peter

--
www.fmtc.be

_______________________________________________
ag-automation mailing list
ag-automation at lists.osadl.org
https://lists.osadl.org/cgi-bin/mailman/listinfo/ag-automation

