[ag-automation] application layer use cases for fieldbus stack

Robert Schwebel r.schwebel at pengutronix.de
Tue Jul 10 20:48:41 CEST 2007


On Tue, Jul 10, 2007 at 05:10:37PM +0200, Peter Soetens wrote:
> We're using these approaches to avoid using locks:
> 
> 1. Use callbacks to read the bus
> This allows reading data from incoming TPUs in a thread-safe way, as
> long as all the callbacks are serialised. This won't work in use cases
> A and B, though.

Does "serialising the callbacks" mean that the multiplexer code calls
the callback functions strictly one after the other?
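
I.e. something like this (made-up names, just to be sure we are
talking about the same thing)?

#include <stddef.h>

struct pv_callback {
        void (*fn)(const void *data, size_t len, void *ctx);
        void *ctx;
        struct pv_callback *next;
};

/* multiplexer, after a TPU has arrived */
static void dispatch_callbacks(struct pv_callback *list,
                               const void *data, size_t len)
{
        struct pv_callback *cb;

        for (cb = list; cb != NULL; cb = cb->next)
                cb->fn(data, len, cb->ctx);  /* strictly one at a time */
}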

> 2. Use FIFOs to write the bus
> When you pv_set() new data, the data is pushed into a FIFO, which
> serialises all access; the FIFO is emptied by the bus thread. This
> works in all three cases.

How would you implement that FIFO?
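
Just to have something concrete to discuss: assuming a single producer
(the pv_set() caller) and a single consumer (the bus thread), a
lock-free ring buffer could look like the sketch below. The names and
sizes are invented, and I'm using C11 atomics for brevity:

#include <stdatomic.h>
#include <stdbool.h>

#define FIFO_SLOTS 64U  /* must be a power of two */

struct pv_update {
        unsigned int pv_id;
        unsigned char data[8];
};

struct pv_fifo {
        struct pv_update slot[FIFO_SLOTS];
        _Atomic unsigned int head;  /* only written by the producer */
        _Atomic unsigned int tail;  /* only written by the consumer */
};

/* application thread */
static bool fifo_push(struct pv_fifo *f, const struct pv_update *u)
{
        unsigned int head = atomic_load_explicit(&f->head, memory_order_relaxed);
        unsigned int tail = atomic_load_explicit(&f->tail, memory_order_acquire);

        if (head - tail == FIFO_SLOTS)
                return false;  /* full; the caller decides what to do */

        f->slot[head % FIFO_SLOTS] = *u;
        atomic_store_explicit(&f->head, head + 1, memory_order_release);
        return true;
}

/* bus thread */
static bool fifo_pop(struct pv_fifo *f, struct pv_update *u)
{
        unsigned int tail = atomic_load_explicit(&f->tail, memory_order_relaxed);
        unsigned int head = atomic_load_explicit(&f->head, memory_order_acquire);

        if (head == tail)
                return false;  /* empty */

        *u = f->slot[tail % FIFO_SLOTS];
        atomic_store_explicit(&f->tail, tail + 1, memory_order_release);
        return true;
}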

> 3. Use lock-free algorithms to share process images (= pv groups) among 
> threads. This works in all three cases, but you need to read 
> (-modify-write) the process image as a whole.

I'm wondering if this would also work with RCU-on-TPUs.
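
What I have in mind is roughly the following: the bus thread builds a
new TPU snapshot and publishes it with a single pointer store, and
readers only ever dereference the current pointer. Names are invented,
and the grace-period handling that makes real RCU safe is left out:

#include <stdatomic.h>
#include <stdlib.h>
#include <string.h>

struct tpu_image {
        unsigned char data[64];  /* process data of one TPU */
};

static _Atomic(struct tpu_image *) current_tpu;

/* bus thread: build a new snapshot, publish it with one store */
static void tpu_publish(const unsigned char *raw, size_t len)
{
        struct tpu_image *img = malloc(sizeof(*img));

        if (!img)
                return;
        memcpy(img->data, raw, len);
        atomic_store_explicit(&current_tpu, img, memory_order_release);
        /*
         * Real RCU would now wait for a grace period and free the
         * previous image; reclamation is deliberately omitted here.
         */
}

/* any reader: grabs a consistent snapshot without taking a lock */
static const struct tpu_image *tpu_read(void)
{
        return atomic_load_explicit(&current_tpu, memory_order_acquire);
}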

> The disadvantage of points 2 and 3 is more memory usage.
> 
> Now if you start locking on the pv group, I believe that you're very 
> close to point 3 as a usage pattern.

...

> > - Are the scenarios A-C above realistic for your applications?
> 
> They are realistic, but not recommended. In Orocos, we use DataObjects 
> to share process images among threads (case A) and Events (similar to 
> boost::signals) to serialise updates with a (periodic) thread (cases B
> and C).

The question was focused more on applications than on infrastructure.
I assume that the PLC people have a strictly cyclic approach (case A),
but I'm wondering what else people would want to do with the framework.
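
For the cyclic case I'd expect something like the following pattern;
pv_group_read/write are invented placeholders for whatever the group
access API will end up being called:

#include <string.h>
#include <time.h>

#define CYCLE_NS 1000000L  /* 1 ms cycle time, for example */

struct image { unsigned char pv[32]; };

/* placeholders for the stack's group access API */
static void pv_group_read(struct image *in)
{
        (void)in;  /* latch the whole input image */
}

static void pv_group_write(const struct image *out)
{
        (void)out;  /* flush the whole output image */
}

static void run_logic(const struct image *in, struct image *out)
{
        /* user program: compute outputs from the latched inputs */
        memcpy(out->pv, in->pv, sizeof(out->pv));
}

int main(void)
{
        struct image in, out;
        struct timespec next;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
                pv_group_read(&in);
                run_logic(&in, &out);
                pv_group_write(&out);

                next.tv_nsec += CYCLE_NS;
                if (next.tv_nsec >= 1000000000L) {
                        next.tv_nsec -= 1000000000L;
                        next.tv_sec++;
                }
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
}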

> > - Especially with regard to the "on change" use cases - how are the
> >  requirements in YOUR application?
> 
> We want to record the change without using locks in periodic and
> reactive threads. The change is always pushed asynchronously to the
> client. A mechanism is set up such that a thread checks its 'event
> queue' and processes the pending events (i.e. makes a copy of the
> event data) before or after its calculations. A thread has the choice
> to block on the event queue or to return immediately when it is empty.
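
So on the consumer side that would look roughly like this? (A sketch
with invented names; it uses a mutex/condvar pair for brevity to show
the block-or-return-immediately choice, even though your real queue is
lock-free on the recording side.)

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct event {
        unsigned int pv_id;
        unsigned char data[8];  /* copy of the event data */
        struct event *next;
};

struct event_queue {
        pthread_mutex_t lock;
        pthread_cond_t nonempty;
        struct event *head, *tail;
};

/* bus side: hand a recorded change over to the consumer thread */
static void event_queue_push(struct event_queue *q, struct event *ev)
{
        ev->next = NULL;
        pthread_mutex_lock(&q->lock);
        if (q->tail)
                q->tail->next = ev;
        else
                q->head = ev;
        q->tail = ev;
        pthread_cond_signal(&q->nonempty);
        pthread_mutex_unlock(&q->lock);
}

/* consumer: returns NULL right away when empty and 'block' is false */
static struct event *event_queue_pop(struct event_queue *q, bool block)
{
        struct event *ev;

        pthread_mutex_lock(&q->lock);
        while (!q->head) {
                if (!block) {
                        pthread_mutex_unlock(&q->lock);
                        return NULL;
                }
                pthread_cond_wait(&q->nonempty, &q->lock);
        }
        ev = q->head;
        q->head = ev->next;
        if (!q->head)
                q->tail = NULL;
        pthread_mutex_unlock(&q->lock);

        return ev;
}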

I'd like to be able to push the data into the TPU buffers with
zero-copy. For a high speed ultrasound data source, for example, an
interrupt would occur, the kernel driver would arm a DMA transfer and
make sure it has direct access to the TPU buffer memory (for example
with the user pointer method used by V4L2). The transfer from the
hardware into the user buffer would then work without burning CPU
cycles.
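
To make the analogy concrete, the V4L2 user pointer path looks roughly
like this from userspace (error handling, STREAMON and the DQBUF that
returns the filled buffer are left out):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* 'fd' is an open capture device, 'tpu_buf' the TPU buffer memory */
static int queue_tpu_buffer(int fd, void *tpu_buf, size_t len)
{
        struct v4l2_requestbuffers req;
        struct v4l2_buffer buf;

        memset(&req, 0, sizeof(req));
        req.count = 1;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_USERPTR;
        if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
                return -1;

        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_USERPTR;
        buf.index = 0;
        buf.m.userptr = (unsigned long)tpu_buf;
        buf.length = len;

        /* the driver DMAs straight into tpu_buf, no copy in between */
        return ioctl(fd, VIDIOC_QBUF, &buf);
}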

> > - On which "changes" would you like to be woken up? Per pv? Per TPU? Per
> >  pv_group? Why?
> 
> I believe most will answer per pv, but with the ability to track
> multiple pvs simultaneously.

I would expect that, especially in your object-oriented designs, a
control object would be interested in changes on its pv group?

> I understand your use cases from a 'low level API' point of view, but
> I would not recommend them. If you really want these cases anyway, I
> would 'recommend' one big lock and taking the performance hit that
> comes with it. That's the only way you're guaranteed to be on the
> safe side.

Well, some of the use cases (like A) will probably be a requirement
from, for example, the PLC people, because their pattern works exactly
like this.

Robert
-- 
 Dipl.-Ing. Robert Schwebel | http://www.pengutronix.de
 Pengutronix - Linux Solutions for Science and Industry
   Handelsregister:  Amtsgericht Hildesheim, HRA 2686
     Hannoversche Str. 2, 31134 Hildesheim, Germany
   Phone: +49-5121-206917-0 |  Fax: +49-5121-206917-9


