LabVIEW


For loop overflow number help

I have three bytes, each with a maximum of 255. When the first bit hits 256 it should carry into the second bit and change it to 1 (which it does currently) and drop the first byte down to 0, which it also does. But the second byte goes past 255 when it reaches that high, and I'm not sure how to drop it back down to 0 once it hits that number. The third byte should also be capped at 255, but it keeps going as well.
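Since LabVIEW diagrams are graphical, here is a small Python sketch (not the poster's actual code) that models the carry behavior being described, with each byte wrapping back to 0 at 256:

```python
def split_counter(count):
    """Model three cascading bytes: byte0 is least significant,
    each byte wraps back to 0 at 256 and carries into the next."""
    byte0 = count % 256
    byte1 = (count // 256) % 256
    byte2 = (count // 65536) % 256
    return byte0, byte1, byte2

# At 256, byte0 wraps to 0 and byte1 becomes 1.
print(split_counter(256))    # (0, 1, 0)
print(split_counter(65535))  # (255, 255, 0)
```

The modulo and integer division keep every byte in the 0-255 range automatically, which is the wrapping the post is asking for.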

Message 1 of 5

Can you save your VI for a previous version, or at least show a picture?

 

A "bit" cannot reach 256. You seem to randomly mix bits and bytes. It helps to be precise!

What is the datatype? (e.g. U8 cannot go past 255!).

 

Where do the bytes come from? What makes them increment? What do they mean?

 

Can't you just increment a U32 and split it into the three least significant bytes?

Message 2 of 5

[Attached image: type-cast.png]

 

I don't know why you need a loop that doesn't do anything but count up, but you can type cast an I32 to an array of four U8s to get the described behavior.
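The same type cast can be sketched in Python with the standard `struct` module (a stand-in for the LabVIEW diagram; `'>i'` is used because LabVIEW's Type Cast produces big-endian byte order):

```python
import struct

def i32_to_u8_array(value):
    """Reinterpret an I32 as four U8s, most significant byte first,
    mirroring LabVIEW's Type Cast of an I32 to a U8 array."""
    return list(struct.pack('>i', value))

print(i32_to_u8_array(256))  # [0, 0, 1, 0]
print(i32_to_u8_array(255))  # [0, 0, 0, 255]
```

Indexing the last three elements of that array gives exactly the three cascading bytes the original question describes.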

 

(do older stamps count?)

Message 3 of 5

I agree with @StevenD that the "solution" to your problem is to realize that your For Loop, which uses an I32 as the "N" value for the loop count, directs the index (called "i" inside the loop) to take on values from 0 to N-1.  Once you realize that the (four) bytes of "i" are the values you want, you don't need to do any arithmetic on "i"; you just need to break it apart into the three (or four) 8-bit U8 values (each from 0 to 255) that comprise the number.  The Type Cast function will do this for you, but you need to make sure you understand the order of the bytes in the byte array.  (Is Array[0] the most or the least significant byte?  Not sure?  Write a little test code and let LabVIEW show you.)
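The "little test" suggested above can also be sketched in Python rather than LabVIEW (an illustrative stand-in, since Python's `struct` module lets us pick the byte order explicitly):

```python
import struct

# Cast the value 1 to a 4-byte array in big-endian order, which is
# what LabVIEW's Type Cast produces, and see where the 1 lands.
bytes_be = list(struct.pack('>i', 1))
print(bytes_be)  # [0, 0, 0, 1] -> element 0 holds the MOST significant byte
```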

 

Bob Schor

 

P.S. -- you shouldn't use Floats to temporarily store the values of Integers; it doesn't make logical sense.

Message 4 of 5

If you use the Type Cast, the byte stream order in LabVIEW is always Big Endian, that is, most significant byte first, also called network byte order because many well-defined binary network protocols use it. It is what the CPU in the first Macs used, but nowadays no mainstream computer system still uses it. The dominant CPU architecture from Intel only uses the opposite Little Endian order, and while ARM can do both, it is usually run in Little Endian mode too. Other past LabVIEW platforms that used Big Endian natively, besides the Motorola MC68020 in the first Macs, were the PowerPC Macs and PowerPC VxWorks cRIOs, the Solaris version for Sun SPARC, and the HP-UX version for PA-RISC. All of them have been gone for many moons, although PowerPCs still work in many old embedded devices such as printers and network appliances; those, however, were not programmed in LabVIEW.
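The two byte orders can be illustrated with a short Python sketch (again just a model, not LabVIEW code):

```python
import struct

value = 0x01020304
big    = list(struct.pack('>I', value))  # network byte order (LabVIEW Type Cast)
little = list(struct.pack('<I', value))  # Intel x86 / typical ARM order
print(big)     # [1, 2, 3, 4]  most significant byte first
print(little)  # [4, 3, 2, 1]  least significant byte first
```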

 

You can also use Split Number in this case; what it does is a bit more obvious.
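A rough Python analogue of what Split Number does (split an integer into its high and low halves; the helper name here is just for illustration):

```python
def split_number(value):
    """Split an unsigned 32-bit integer into its high and low 16-bit
    halves, roughly mirroring LabVIEW's Split Number primitive."""
    return (value >> 16) & 0xFFFF, value & 0xFFFF

hi, lo = split_number(0x00010203)
print(hi, lo)  # 1 515  (0x0001 and 0x0203)
```

Applying it twice (once on the U32, then again on the low half) yields the individual bytes, which is how Split Number is typically chained on a LabVIEW diagram.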

Rolf Kalbermatter
My Blog
Message 5 of 5