[ag-automation] application layer use cases for fieldbus stack

Peter Soetens peter.soetens at fmtc.be
Wed Jul 11 13:37:08 CEST 2007


On Tuesday 10 July 2007 20:48:41 Robert Schwebel wrote:
> On Tue, Jul 10, 2007 at 05:10:37PM +0200, Peter Soetens wrote:
> > We're using these approaches to avoid using locks:
> >
> > 1. Use callbacks to read bus
> > This allows data from incoming TPUs to be read in a thread-safe way,
> > as long as all the callbacks are serialised. This won't work in use
> > cases A and B, though.
>
> serializing the callbacks means that the multiplexer code calls one
> callback function after the other?

Yes, and that it is not re-entered from a second thread.

>
> > 2. Use FIFOs to write bus
> > When you pv_set() new data, the data is pushed in a fifo, which
> > serialises all access, the fifo is emptied by the bus thread. This
> > works in all three cases.
>
> How would you implement that fifo?

I wasn't talking about implementation yet here, only usage patterns. 
As long as the fifos are independent, you could use a lock, but Orocos 
has lock-free implementations of fifos, at the expense of memory.

> > > - Are the scenarios A-C above realistic for your applications?
> >
> > They are realistic, but not recommended. In Orocos, we use DataObjects
> > to share process images among threads (case A) and Events (similar to
> > boost::signals) to serialise updates with a (periodic) thread (case B
> > and C).
>
> The question was more focussed towards applications, not infrastructure.
> I assume that the PLC people have a strictly cyclic approach (case A),
> but I'm wondering what else people would want to do with the framework.

Some points in this discussion are not yet clear to me. Are you looking for 
a 'master'- and 'slave'-side API? And what does 'LTI' stand for, by the way?

>
> > > - Especially with regard to the "on change" use cases - how are the
> > >  requirements in YOUR application?
> >
> > We want to record the change without using locks in periodic and
> > reactive threads. The change is always pushed asynchronously to the
> > client. A mechanism is set up such that a thread checks its 'event
> > queue' and processes them (i.e. make a copy of the event data) before
> > or after its calculations. A thread has the choice to block on the
> > event queue or return immediately when empty.
>
> I'd like to be able to push the data into the TPU buffers with zerocopy;
> that would for example mean that, for a high speed ultrasound data
> source, an interrupt would occur, the kernel driver arms a DMA and makes
> sure it has direct access to the TPU buffer memory (for example with the
> user pointer method used by V4L2). That would mean the transfer from
> hardware into the user buffer would work without burning CPU cycles.

This can only work if you serialise read access to the TPU, or if only one 
thread reads the contents of the TPU. To achieve that, you need to use 
callbacks to inform multiple interested parties.

>
> > > - On which "changes" would you like to be woken up? Per pv? Per TPU?
> > > Per pv_group? Why?
> >
> > I believe most will answer per pv, but with the ability to track
> > multiple pvs simultaneously.
>
> I would expect that, especially in your object oriented designs, a
> control object would be interested in changes on its pv group?

You mean the 'drive' object ? Yes, I would guess so, and the user of the drive 
object is interested as well. But I thought it would be enough to enumerate 
the pv instances you are interested in and that 'groups' were just 
shorthands.

Peter

-- 
Peter Soetens -- FMTC -- <http://www.fmtc.be>

