The scan cycle is 1 ms, and the resolution of the system time is in multiples of 100 ns. ...
1) From the Beckhoff documentation, when the posted code converts the 100 ns SYSTEMTIME to string format, the resolution drops to 1 ms (see here), so the time has already lost resolution, down to 1 ms, by the time it is converted back to LREAL for the rate calculation.
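To illustrate point 1, here is a rough analogue in Python (not the actual Beckhoff library, and the function names are made up for this sketch): formatting a 100 ns tick count at millisecond resolution discards the sub-millisecond part, so parsing the string back cannot recover it.

```python
def ticks_to_ms_string(ticks_100ns: int) -> str:
    """Format a 100 ns tick count with millisecond resolution.
    The integer division is the lossy step: sub-ms ticks are discarded."""
    ms = ticks_100ns // 10_000          # 10,000 ticks of 100 ns per ms
    return f"{ms} ms"

def ms_string_to_ticks(s: str) -> int:
    """Parse back to 100 ns ticks; the sub-ms information is gone."""
    return int(s.split()[0]) * 10_000

t = 123_456_789                          # 12.3456789 ms in 100 ns ticks
round_trip = ms_string_to_ticks(ticks_to_ms_string(t))
print(round_trip)                        # 123_450_000: up to 1 ms of error
```

The round trip loses everything below the millisecond, which is exactly the resolution loss the rate calculation then inherits.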
2) Since the fbGetSystemTime call is issued in the 1 ms scan cycle, it retrieves the time of the first 1 ms scan cycle after the rising edge, not the time of the rising edge itself. So even if the code were refactored to use a time format that preserved the 100 ns resolution, the resolution of the measurement would still be 1 ms.
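To make point 2 concrete, a small sketch (in Python rather than Structured Text, purely for illustration): the timestamp the PLC records is the first scan boundary after the edge, so any interval measured from two such timestamps is quantized to the scan cycle regardless of the clock's underlying resolution.

```python
import math

SCAN_MS = 1.0   # scan cycle, in ms

def sampled_edge_time(true_edge_ms: float) -> float:
    """Timestamp the PLC logic sees: the first scan *after* the edge."""
    return math.ceil(true_edge_ms / SCAN_MS) * SCAN_MS

# Two edges actually 2.4 ms apart...
t0, t1 = 10.3, 12.7
measured = sampled_edge_time(t1) - sampled_edge_time(t0)
print(measured)   # 2.0 -- the measured interval carries up to a scan cycle of error
```

The 2.4 ms interval is reported as 2.0 ms here; with different edge phases it could just as well read 3.0 ms.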
The only way to get (near) 100 ns time resolution between two consecutive rising edges would be an interrupt routine triggered by the rising edge of the input of interest; even then, the actual time resolution would depend on the input hardware's capability, i.e. its response time.
Another approach would be to sample the times of many consecutive rising edges and perform a least-squares fit of those data to a line: the slope of that line is the mean interval per pulse. Even at the 1 ms scan-cycle resolution, the multiple samples in effect "beat down" the sampling/quantization noise and yield a resolution better than 1 ms; IIRC, the noise reduction goes roughly as the reciprocal of the square root of the number of samples in the fit, but don't quote me on that.

This approach also applies a time filter to the measurement, so it pays for the reduced noise by responding more slowly to changes in rate, which would affect the PID parameters used to tune the system.
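A quick numerical sketch of the least-squares idea (Python for illustration; the 2.4 ms interval and pulse count are made-up values): edge timestamps are quantized up to 1 ms scan boundaries, then the slope of timestamp vs. pulse index is fitted. The fitted slope recovers the true interval far better than differencing a single pair of quantized timestamps.

```python
import math
import random

SCAN_MS = 1.0          # scan cycle, ms
TRUE_INTERVAL = 2.4    # assumed true ms per pulse, for this demo
N = 200                # number of edges in the fit

random.seed(0)
start = random.uniform(0, SCAN_MS)   # arbitrary phase of the first edge

# Timestamps the PLC would log: each edge rounded up to the next scan.
ts = [math.ceil((start + k * TRUE_INTERVAL) / SCAN_MS) * SCAN_MS
      for k in range(N)]

# Least-squares slope of timestamp vs. pulse index = mean interval per pulse.
xs = list(range(N))
xbar = sum(xs) / N
ybar = sum(ts) / N
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ts))
         / sum((x - xbar) ** 2 for x in xs))

single = ts[1] - ts[0]   # naive two-edge estimate, 1 ms granularity
print(f"fit: {slope:.4f} ms, single pair: {single:.1f} ms")
# The fit lands within a few microseconds of 2.4 ms, while the
# single-pair estimate can be off by up to a full scan cycle.
```

Note the trade-off stated above: the fit spans N pulses' worth of time, so a genuine change in rate only shows up after the fit window has refilled.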