Temperature Sensor Encoding

I’m trying to integrate some custom AC control software with Velbus.

I see that the current temperature is sent onto the bus by many modules with a built-in sensor, such as the VMBGP4-20. Typically this is done with a message of type 0xE6 “COMMAND_SENSOR_TEMPERATURE”.

The module protocol shows Databytes 2 and 3 transmitting the current temperature in two’s complement format.
[In effect, the transmitted value is the current temperature (in degrees C) divided by the resolution (0.0625 degrees).]

There is a table showing the two bytes for various sample temperatures. There are a couple of typos in it, which I found using the calculator at Two's (2s) Complement Calculator while working out how this all fits together.

  1. Let’s start with the positive values. The values for 0.5, 0.25, 0.125 and 0.0625 are just the binary format of the corresponding multiples of the resolution, i.e. 8, 4, 2 and 1, shifted 5 bits to the left. They are not two’s complement and require no complex maths.

  2. The next value is the one shown for 63.5 - this is 1016 times the resolution. The binary version of this is 0b0011 1111 1000; shifting this 5 bits to the left gives 0b0111 1111 0000 0000, which is not what is in the table.
    Taking the value in the table “0111 1111 111x xxxx” and shifting it 5 bits to the right gives 0b0011 1111 1111, which is decimal 1023. 1023 x 0.0625 = 63.9375 degrees - close, but off by just enough to be misleading.

  3. The negative values are the two’s complement of the multiple with the sign removed.
    So for -0.0625 the multiple is -1; drop the sign to get 1, and the two’s complement is 111 1111 1111. Shift this 5 bits to the left and the values line up with the table.
    The same sort of thing works for -0.125 (-2 times the resolution) and -0.25 (-4 times the resolution).

The value shown for -0.5 appears to be wrong. This is actually -8 times the resolution. The two’s complement calculator shows 111 1111 1000 for -8. Shifting this 5 bits to the left gives 0b1111 1111 0000 0000.
The value shown in the table is “1111 1110 000x xxxx”; decode it (shift it 5 bits to the right, flip the bits and add one) and it becomes 0b0001 0000, which is actually 16 times the resolution, or -1.0 degrees.

For -55 (which is -880 times the resolution) the calculator shows 1001 0010 000. Shift this 5 bits to the left for 0b1001 0010 000x xxxx, which is correct.

I think the negative values could go at least as far as the positive ones: a transmitted value of “1000 0000 000x xxxx” translates back to -1024 (x 0.0625), which gives -64.0 degrees.
Whether this would be of practical use apart from Arctic conditions is a moot point though :slight_smile:

Anyway I hope this helps people get their heads around how these temperature readings work (along with the small corrections required).

Can you add the table so all the info is in one place? I want to see where the error is. I’d expect this for a binary representation of real numbers, with 4 bits after the point.

That’s a lot of numbers to compute and display.

I’ve just worked around the limits. Here’s some code if you want to play around.

bool vmbTemperatureFromSensor(byte outTemp[2], float sensorTemperature)
{
  const float precision = 0.0625;

  // work out the multiple of the resolution
  int16_t multiple = (int16_t)(sensorTemperature / precision);

  if (multiple >= 0)
  {
    // positive: just shift the multiple into the top 11 bits
    uint16_t raw = (uint16_t)multiple << 5;
    outTemp[0] = raw >> 8;
    outTemp[1] = raw & 0xFF;
  } else {
    // negative: invert and add one for the two's complement,
    // then shift into the top 11 bits
    uint16_t twos = (uint16_t)(~(-multiple) + 1) << 5;
    outTemp[0] = twos >> 8;
    outTemp[1] = twos & 0xFF;
  }
  return true;
}

Call it in a loop?

  uint8_t currentTemp[2];
  for (int i = -1024; i < 1024; i++) {
    // i is the multiple, so convert back to degrees first
    if (vmbTemperatureFromSensor(currentTemp, i * 0.0625f))
    {
      // do something with currentTemp[0] and currentTemp[1]
    }
  }