Timer Accuracy: TON vs Periodic Scan Counting (RSLogix 5000)

dcooper33 (Lifetime Supporting Member + Moderator)
Got into a bit of a debate the other day, and wanted to get some gurus' opinion/expertise on the subject.

When the duration of an output's ON (or OFF) time is critical, and that control is done in a fast periodic task (say 2.0ms), is it "better" to use a TON within the task, or to count the scans that the output is on, incrementing an accumulator by the task period on each scan?

We have a roll-yer-own control algorithm that periodically adjusts the preset on-time of an injector solenoid based on flow-meter feedback. My contention was that not only is a TON simpler for a technician to understand, but that its potential error is always less, since the error can only ever be in one direction (actual time > preset time), and max error <= max elapsed time between task triggers. If you are counting scans, then the max error is still subject to elapsed-time error on the "final" scan, but that method is also subject to cumulative error on all the "duration" scans. Of course we all know that a 2ms period is not always 2.0ms. It is probably extremely close most of the time, but there is no guarantee that 2.0ms doesn't occasionally take 3.25ms, or 1.6ms. Then there is task overlap: every time an overlap occurs while the solenoid is on, you add >= one task period to your error.
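For the sake of discussion, the scan-counting version I'm arguing against looks something like this in Structured Text (tag names are made up, just for illustration):

// Runs in a 2.0ms periodic task. OnTimeAcc_ms, OnTimePreset_ms : DINT.
IF InjectorOn THEN
    // Credit one nominal task period per scan; any jitter or overlap
    // in the actual period goes straight into accumulated error.
    OnTimeAcc_ms := OnTimeAcc_ms + 2;
    IF OnTimeAcc_ms >= OnTimePreset_ms THEN
        InjectorOn := 0;     // end of the ON pulse
        OnTimeAcc_ms := 0;
    END_IF;
END_IF;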

Now the other argument is that the PLC "averages" the time between periodic tasks so that it very nearly equals the nominal time, but I've never read any literature on the matter, so I don't know what kind of time-base we are talking about here.

So my feeling is that overall there is much more uncertainty with the scan-counting method than with the real-time clock built into a TON. But I know a lot of programmers consider scan-counting a "best practice", some of whom I highly respect. So I'd like to hear some other perspectives on the matter. What do you guys use? What are some pros/cons of both approaches?

Cheers,
Dustin

🍻
 
Using a pulse-count method, i.e. the "S" registers, is the best method because it uses the actual CPU pulses; scan speed does not come into the formula.
Normally I find the TON method etc. has no problems. I just keep the possibility in mind, so if needed I will use the pulse method instead.
 
I can't say for certain I know the answer to your question (although I can say I am not a fan of scan counting for the reasons you mention), but I did want to throw out a 3rd option.

Usually when I need ultra-precise timing I read the system clock and base the timing off that. When the timer starts, read the system clock and store it in a value called "start_time". Then add the intended duration to the start time and call it "end_time". Now just keep polling the system clock and compare it to the end time. Depending on your approach, you can even determine how far you ran over and apply that correction to the next cycle; in this way you can practically eliminate accumulated error. With event tasks and immediate I/O updates you can probably achieve very precise timing of a physical output. This also gets you around the timing errors associated with a re-cycling TON instruction, and it actually gives you better resolution.
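Just as a sketch, that could look something like this in ST (tag names are invented, and I'm assuming a GSV of the CST object's CurrentValue attribute, which returns a DINT[2] of microseconds):

// Hypothetical tags: cst_now : DINT[2]; start_us, end_us, overrun_us, Duration_us : DINT
GSV(CST, , CurrentValue, cst_now[0]);     // coordinated system time, in uSec

IF StartPulse THEN
    start_us := cst_now[0];               // lower 32 bits are enough for short pulses
    end_us := start_us + Duration_us;     // intended ON time in uSec
    InjectorOn := 1;
END_IF;

// Signed DINT subtraction rides through the 32-bit rollover,
// as long as the interval is well under ~35 minutes.
IF InjectorOn AND ((cst_now[0] - end_us) >= 0) THEN
    overrun_us := cst_now[0] - end_us;    // how far we ran over; feed into the next cycle
    InjectorOn := 0;
END_IF;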
 
"Usually when I need ultra-precise timing I read the system clock and base the timing off that. [...] In this way you can practically eliminate accumulated error."

I like that method, too. I have used something similar in the past for tracking runtime minutes throughout the course of a 12hr shift: accumulating system-time uSecs, and rolling anything over 60 million (one minute's worth) into the next minute count. Worked great for that app, but that was in a slow task where I didn't mind the unconditional GSV.
If I understand correctly, a TON is doing the same thing with regard to system time between scans, but you only have mSec resolution rather than uSec, right?
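Roughly what that looked like, sketched in ST (tag names made up from memory; this assumes WALLCLOCKTIME CurrentValue comes back as a DINT[2] of uSecs):

// Hypothetical tags: wct : DINT[2]; last_us, delta_us, acc_us, runtime_min : DINT
GSV(WALLCLOCKTIME, , CurrentValue, wct[0]);  // system time in uSec
delta_us := wct[0] - last_us;                // signed math handles the 32-bit wrap
last_us := wct[0];
IF Running THEN
    acc_us := acc_us + delta_us;
    IF acc_us >= 60000000 THEN               // 60 million uSec = one minute
        runtime_min := runtime_min + 1;
        acc_us := acc_us - 60000000;         // carry the remainder, no lost uSecs
    END_IF;
END_IF;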
 
"If I understand correctly, a TON is doing the same thing with regard to system time between scans, but you only have mSec resolution rather than uSec, right?"

The resolution is one thing. I would also expect to lose one scan's worth of time every time you reset the timer. Whether that is acceptable or not depends on what you are doing. Reading the clock and processing it manually gives you more control and is more foolproof, but it is definitely more convoluted. Still, if I had to choose between a TON or the RTC as opposed to counting scans, I would choose either of the former.
 
If you choose to use the sharpest tool in the box, follow the OG's approach above.
If you choose the proven method, don't reset the timer: when .DN goes true, subtract the .PRE from the .ACC, thereby preserving any leftover milliseconds, then OTU the .DN bit. Be sure to add a condition check so the subtraction can't leave a negative accumulator (e.g. if the preset is being changed on the fly).

This is the classic A-B way to preserve long-term accuracy for things like self-"resetting" interval/cycle timers using the built-in TON.
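In ST terms, using the TONR flavor of the timer, the idea looks something like this (tag names made up; the ladder version would be a TON with a SUB and an OTU):

// Hypothetical FBD_TIMER tag 'cycleTmr'; CyclePulse : BOOL
cycleTmr.TimerEnable := 1;    // timer runs continuously
cycleTmr.PRE := 500;          // e.g. a 500ms cycle
TONR(cycleTmr);

CyclePulse := cycleTmr.DN;    // true for one scan per cycle
IF cycleTmr.DN THEN
    // Carry the overshoot into the next cycle instead of zeroing ACC,
    // so drift doesn't accumulate one scan's worth per cycle.
    IF cycleTmr.ACC >= cycleTmr.PRE THEN
        cycleTmr.ACC := cycleTmr.ACC - cycleTmr.PRE;
    ELSE
        cycleTmr.ACC := 0;    // condition check: never leave ACC negative
    END_IF;
END_IF;
// TONR recomputes .DN from .ACC vs .PRE on the next call, so the
// explicit OTU from the ladder version isn't needed here.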
 
Thanks for the suggestions guys. I'm going to look more into the CST object; it seems like the best tool for short-duration time control. Never used it before, so this sounds like a good chance to get acquainted. :)
The roll-yer-own GSV timer is great for self-resetting cyclic stuff, as is Okie's method of TON rollover. Getting back to the original point, all the above methods appear to be superior to scan-counting, which is what I suspected. I guess some guys will always try to defend what they have been taught (or what they have taught themselves).
Thanks again for the good ideas!

Cheers,
Dustin

🍻
 
I think we need to be careful not to just assume that all GSV calls are equal.

An unconditional GSV for the current value of the clock will cost you 4 to 12 microseconds per scan depending on your processor. On an L71 it is about 4.63uS; on an L16ER it is about 6.78uS.

Yes, if you start accessing the date or things like that then processing time can balloon to 100+uS. The GSV execution times are all over the map. You really just have to treat each Attribute individually. From my standpoint I wouldn't let 7uS change my approach unless I had a really good reason.
 
