Question: How do I convert an IPv6 address into a 128-bit number?

JamesBowen

19+ years of Progress programming and still learning.
I have a requirement to convert an IPv6 address into a 128-bit number.
Because it's a 128-bit number, I will have to store it in the database as a character field.

I found some JavaScript code which does what I need, but for something so simple I can't figure out what I need to do to convert it to ABL code.

JavaScript:
var ip = '2001:0db8:0:0:8d3:0:0:0';


// simulate your address.binaryZeroPad(); method
var parts = [];
ip.split(":").forEach(function(it) {
    var bin = parseInt(it, 16).toString(2);
    while (bin.length < 16) {
        bin = "0" + bin;
    }
    parts.push(bin);
})
var bin = parts.join("");

// Use BigInteger library
var dec = bigInt(bin, 2).toString();
console.log(dec);

This is what I have, but it's not working as expected.

Code:
def var ipAddress   as char.
def var part        as raw extent.
def var hexPart     as character.
def var hexPartTemp as character.
def var partLoop    as integer.
def var base10      as character.
def var ipv6length  as integer.

ipAddress = '2001:0db8:0:0:8d3:0:0:0'.

ipv6length = num-entries(ipAddress, ":").

extent(part) = ipv6length.

do partLoop = 1 to ipv6length:

    hexPartTemp = entry(partLoop, ipAddress, ':' ).

    hexPart = "0000".
    OVERLAY(hexPart, 5 - length(hexPartTemp) ) = hexPartTemp.
 
    message hexPart.
    part[partLoop] = hex-decode( hexPart ).
    
    base10 = base10 + STRING( GET-UNSIGNED-SHORT( part[partLoop], 1) ).

end.

message base10.
 
I'm thinking you would be better off converting a byte array from System.Net.IPAddress to a number.

20-01-0D-B8-00-00-00-00-08-D3-00-00-00-00-00-00

I base64string-erated it in the example below just to show it being turned back into an IP address from a byte array that came from a string:

DEFINE VARIABLE strIP AS CHARACTER NO-UNDO.
DEFINE VARIABLE ipAddress AS System.Net.IPAddress NO-UNDO.
DEFINE VARIABLE ipAddress2 AS System.Net.IPAddress NO-UNDO.
DEFINE VARIABLE bBytes AS "System.Byte[]" NO-UNDO.
DEFINE VARIABLE bBytes2 AS "System.Byte[]" NO-UNDO.
DEFINE VARIABLE strBytes AS CHARACTER NO-UNDO.

strIP = '2001:0db8:0:0:8d3:0:0:0'.
ipAddress = System.Net.IPAddress:parse(strIP).
bBytes = ipAddress:GetAddressBytes().
MESSAGE
System.BitConverter:ToString(bBytes)
VIEW-AS ALERT-BOX.

strBytes = System.Convert:ToBase64String(bBytes).
bBytes2 = System.Convert:FromBase64String(strBytes).
ipAddress2 = NEW System.Net.IPAddress (bBytes2).

MESSAGE ipAddress2:ToString()
VIEW-AS ALERT-BOX.
 
The problem is that ABL decimals are not accurate to 128 bits, try:

Code:
message exp( 2, 127 ).

If you want to go down this road you will need two 64-bit integers.
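
Something along these lines, as a rough and untested sketch: the g array is just the eight groups of the example address hard-coded (in real code you would extract them the way the original post already does), and iHi / iLo are names I made up. One caveat: INT64 is signed, so an address whose top or bottom half starts with a group of 0x8000 or higher would overflow this.

Code:
/* Untested sketch: pack the eight 16-bit groups into two signed 64-bit halves. */
/* Caveat: INT64 is signed, so a half whose first group is >= 0x8000 overflows. */
def var g   as int64   no-undo extent 8. /* the eight groups, high to low */
def var iHi as int64   no-undo.          /* top 64 bits of the address    */
def var iLo as int64   no-undo.          /* bottom 64 bits of the address */
def var i   as integer no-undo.

/* groups of 2001:0db8:0:0:8d3:0:0:0, i.e. 0x2001 0x0db8 0 0 0x08d3 0 0 0 */
assign
    g[1] = 8193 g[2] = 3512 g[3] = 0 g[4] = 0
    g[5] = 2259 g[6] = 0    g[7] = 0 g[8] = 0.

do i = 1 to 4:
    assign
        iHi = iHi * 65536 + g[i]      /* shift the high half left 16 bits and add the next group */
        iLo = iLo * 65536 + g[i + 4]. /* same for the low half                                    */
end.

message iHi iLo view-as alert-box.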
 
Running message exp( 2, 127 ). gives:

Code:
170141183460469273745278895279919563358.9442406122

whereas 2^127 is actually 170141183460469231731687303715884105728, so the result is off well before the last digit.
 
Hmm... it's a bit odd. ABL decimals are supposed to support 50 digits. If I just keep multiplying by two, the decimal behaves:

Code:
def var i as int.
def var de as decimal format '>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>.9' initial 1.

repeat:

    display
        i
        de
        exp( 2, i ) format '>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>.9'
        .
    assign
        i = i + 1
        de = de * 2       
        .   
end.

Not sure why the exp function starts misbehaving at exponent 70.

Update: And I now see that you are not even using a decimal but a character... oops.
 
As an easy example, use :1. Your code returns 0256, which seems to indicate that your high and low bytes are swapped.

If you add:
Code:
    hexPart = substring( hexPart, 3, 2 ) + substring( hexpart, 1, 2 ).
after the overlay you should be ok.
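
For completeness, here is a rough, untested sketch that puts the whole thing together: it reuses the pad / hex-decode / get-unsigned-short approach from the original post plus the byte swap above, then builds the 128-bit value as a decimal digit string (it won't fit in an ABL integer or decimal, and the requirement was to store it as character anyway). The mulAdd function is something I made up, not a built-in; it does schoolbook multiply-and-add on the digit string. It also assumes the address is written out in full, with no "::" shorthand.

Code:
/* mulAdd: home-made helper, multiplies a decimal digit string by a small */
/* factor and adds a value, one digit at a time (schoolbook arithmetic).  */
function mulAdd returns character
    ( input pcNum as character, input piMult as int64, input piAdd as int64 ):

    define variable i      as integer   no-undo.
    define variable iWork  as int64     no-undo.
    define variable iCarry as int64     no-undo.
    define variable cOut   as character no-undo.

    iCarry = piAdd.
    do i = length(pcNum) to 1 by -1:
        iWork  = int64(substring(pcNum, i, 1)) * piMult + iCarry.
        cOut   = string(iWork modulo 10) + cOut.
        iCarry = truncate(iWork / 10, 0).
    end.
    do while iCarry > 0:
        cOut   = string(iCarry modulo 10) + cOut.
        iCarry = truncate(iCarry / 10, 0).
    end.
    return cOut.
end function.

define variable ipAddress   as character no-undo initial '2001:0db8:0:0:8d3:0:0:0'.
define variable hexPart     as character no-undo.
define variable hexPartTemp as character no-undo.
define variable part        as raw       no-undo.
define variable iGroup      as integer   no-undo.
define variable partLoop    as integer   no-undo.
define variable base10      as character no-undo initial "0".

/* assumes all eight groups are present, i.e. no "::" compression */
do partLoop = 1 to num-entries(ipAddress, ":"):
    hexPartTemp = entry(partLoop, ipAddress, ":").

    /* left-pad the group to four hex digits */
    hexPart = "0000".
    overlay(hexPart, 5 - length(hexPartTemp)) = hexPartTemp.

    /* swap the byte pairs so GET-UNSIGNED-SHORT returns the intended value */
    hexPart = substring(hexPart, 3, 2) + substring(hexPart, 1, 2).

    part   = hex-decode(hexPart).
    iGroup = get-unsigned-short(part, 1).

    /* base10 = base10 * 65536 + group, done as string arithmetic */
    base10 = mulAdd(base10, 65536, iGroup).
end.

/* should show 42540766411282592857539836924043198464, unless I slipped a digit */
message base10 view-as alert-box.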
 