Peter Nachtwey
Member
This is really a good question: "In which application is floating point math more reliable than integer math?" Long ago, before floating point chips were available, we had only signed and unsigned integers.
Our 2nd generation of motion controllers was based on Intel and AMD 80186s. These only had 16 bit integer math, so we used two 16 bit words: one represented the integer part and the other the fractional part. The fractions were expressed in units of 1/65536, so 1/2 was 32768, or 8000H. The fraction part was always unsigned, while the high part could be signed or unsigned.
https://www.allaboutcircuits.com/te...sentation-the-q-format-and-addition-examples/
I also did a lot of DSP programming, and one learns lots of tricks. These "tricks" gave much faster results than a floating point library. The catch was that you had to be good at assembly language programming.
When floating point hardware arrived, programmers got lazy. However, floating point still has limits. Adding 1 to a large floating point number is not a good idea, since the mantissa of a REAL is only 24 bits, so integers above 16,777,216 can no longer be incremented exactly. Two 16 bit integer words handle this easily, counting past 16.777M all the way to over 4 billion.
Even now, when we have 64+ bit floating point, we are still careful. If you tell our motion controller to move to 10 meters, we offset the command so that all internal moves are relative to 0, where the precision is higher. Doing the math in absolute coordinates at 10 m, or 10,000 mm, would lose precision.
You can be lazier with floating point, but it can still bite you if you are not careful. This is evident from the new threads on floating point that appear every month or so on this forum. In college I took a class on numerical methods, and part of it was about how to reduce floating point errors.
Search for "Q15 format" and "fixed point format"