Compare commits
956 Commits
Author | SHA1 | Date |
---|---|---|
Mark Qvist | ba2feaa211 | |
Mark Qvist | 097d2b0dd9 | |
Mark Qvist | bb0ce4faca | |
Mark Qvist | 5915228f5b | |
Mark Qvist | 0b66649158 | |
markqvist | e28dd6e14a | |
markqvist | 0a15b4c6c1 | |
markqvist | 62db09571d | |
Mark Qvist | 444ae0206b | |
Mark Qvist | 4b07e30b9d | |
markqvist | 583e65419e | |
liamcottle | 1564930a51 | |
markqvist | b81b1de4eb | |
jacob.eva | 746a38f818 | |
jacob.eva | c230eceaa6 | |
Mark Qvist | 09d9285104 | |
faragher | 3551662187 | |
faragher | f7f34e0ea3 | |
Mark Qvist | 43fc2a6c92 | |
Mark Qvist | b17175dfef | |
Mark Qvist | 1103784997 | |
Mark Qvist | d2feb8b136 | |
Mark Qvist | f595648a9b | |
Mark Qvist | b06f5285c5 | |
Mark Qvist | 8330f70a27 | |
Mark Qvist | 15e10b9435 | |
Mark Qvist | b91c852330 | |
Mark Qvist | 75acdf5902 | |
Mark Qvist | dae40f2684 | |
Mark Qvist | 4edacf82f3 | |
markqvist | 4b0a0668a5 | |
markqvist | a52af17123 | |
Mark Qvist | 0b0a3313c5 | |
markqvist | 34af2e7af7 | |
Jürg Schulthess | 12bf7977d2 | |
Jürg Schulthess | b69b939d6f | |
Jürg Schulthess | b5556f664b | |
Jürg Schulthess | f804ba0263 | |
Jürg Schulthess | 84a1ab0ca3 | |
markqvist | 465695b9ae | |
Mark Qvist | a999a4a250 | |
nothingbutlucas | cbb5d99280 | |
Mark Qvist | 64f5192c79 | |
Mark Qvist | d223ebc8c0 | |
markqvist | c28f413fe6 | |
Kevin Brosius | 92e5f65887 | |
Mark Qvist | b977f33df6 | |
Mark Qvist | 589fcb8201 | |
Mark Qvist | e5427d70ac | |
Mark Qvist | 2f5381b307 | |
Thiaguetz | 11baace08d | |
Mark Qvist | a4d5b5cb17 | |
Mark Qvist | 9cb181690e | |
markqvist | ff6604290e | |
markqvist | 2dbd3cbc0f | |
markqvist | 2a11097cac | |
markqvist | c0e3181ae3 | |
markqvist | 5a0316ae7f | |
Mark Qvist | 177bb62610 | |
Mark Qvist | 7cd3cde398 | |
Mark Qvist | 29bdcea616 | |
Mark Qvist | d9460c43ad | |
markqvist | fb02e980db | |
Mark Qvist | 4947463440 | |
Chad Attermann | 5565349255 | |
Chad Attermann | 1b7b131adc | |
Mark Qvist | ace0d997d4 | |
Mark Qvist | 798c252284 | |
Mark Qvist | 7da22c8580 | |
Mark Qvist | eefbb89cde | |
Mark Qvist | 18f50ff1ae | |
Mark Qvist | 05e97ac0db | |
Mark Qvist | c2c3a144d2 | |
markqvist | ea369015ee | |
markqvist | 9745842862 | |
markqvist | 246289c52d | |
markqvist | ff71cb2f98 | |
Mark Qvist | 5ca1ef1777 | |
Mark Qvist | 2b764b4af8 | |
Mark Qvist | a62843cd75 | |
Mark Qvist | 633435390d | |
Mark Qvist | 1e207ef972 | |
Mark Qvist | 35e9a0b38a | |
Mark Qvist | 3d7f3825fb | |
Mark Qvist | 04b67a545d | |
Mark Qvist | 61c2fbd0da | |
Mark Qvist | 1aba4ec43a | |
markqvist | 841a3daa26 | |
jacob.eva | d98f03f245 | |
Mark Qvist | 878e67f69d | |
Mark Qvist | e582a6d6d1 | |
Mark Qvist | a948afb816 | |
Mark Qvist | 86a294388f | |
Mark Qvist | 429a0b1bd3 | |
Mark Qvist | ee8bb42633 | |
Mark Qvist | c659388a2c | |
markqvist | eaa8199988 | |
jacob.eva | 4f890e7e8a | |
Mark Qvist | a37e039424 | |
Mark Qvist | 8e1e2a9c54 | |
Mark Qvist | e4f94c9d0b | |
Mark Qvist | b007530123 | |
Mark Qvist | 4066bba303 | |
Mark Qvist | 8951517d01 | |
Mark Qvist | ae1d962b9b | |
Mark Qvist | a2caa47334 | |
Mark Qvist | 9f43da9105 | |
Mark Qvist | 038c696db9 | |
Mark Qvist | 8fa6ec144c | |
Mark Qvist | a8ccff7c55 | |
markqvist | a5783da407 | |
Juraj Bednar | bec3cee425 | |
Mark Qvist | b15bd19de5 | |
Mark Qvist | 38390fd021 | |
Mark Qvist | 40e0eee64f | |
Mark Qvist | af4cbb1baf | |
Mark Qvist | d3f4192fe3 | |
Mark Qvist | 47ef62ac11 | |
Mark Qvist | d15ddc7a49 | |
Mark Qvist | d67c8eb1cd | |
Mark Qvist | f4de5d5199 | |
Mark Qvist | 34e42988ea | |
Mark Qvist | 81d5d41149 | |
Mark Qvist | 6b3f3a37f0 | |
Mark Qvist | 60a604f635 | |
Mark Qvist | 55a2daf379 | |
Mark Qvist | 2dbde13321 | |
Mark Qvist | 6620dcde6b | |
Mark Qvist | 60966d5bb1 | |
Mark Qvist | ea22a53bf2 | |
Mark Qvist | 7b9526b4ed | |
Mark Qvist | 676074187a | |
Mark Qvist | 5dd2c31caf | |
Mark Qvist | 2db400a1a0 | |
Mark Qvist | b68dbaf15e | |
Mark Qvist | 84febcdf95 | |
Mark Qvist | c972ef90c8 | |
Mark Qvist | 19a74e3130 | |
Mark Qvist | 5ba789f782 | |
Mark Qvist | 58b5501e17 | |
Mark Qvist | b584832b8f | |
Mark Qvist | fc0cf17c4d | |
Mark Qvist | 001dd369ec | |
Mark Qvist | 9ce2ea4a5c | |
Mark Qvist | eec8814c22 | |
Mark Qvist | 7a6ed68482 | |
Mark Qvist | cd9e23f2de | |
Mark Qvist | ffa84de0bc | |
Mark Qvist | 89d3cdba17 | |
Mark Qvist | 2ba5843f22 | |
Mark Qvist | c4d0f08767 | |
Mark Qvist | db1cdec2a2 | |
Mark Qvist | 1eea1a6a22 | |
Mark Qvist | 4a69ce5a98 | |
Mark Qvist | 8d653cba9b | |
Mark Qvist | a6126a6bc5 | |
Mark Qvist | 957c2b3bc1 | |
Mark Qvist | 494bde4e79 | |
Mark Qvist | 5e39136dff | |
Mark Qvist | 4b26a86a73 | |
Mark Qvist | 43a6e280c0 | |
Mark Qvist | 237a45b2ca | |
Mark Qvist | b161650ced | |
Mark Qvist | 24975eac31 | |
Mark Qvist | 5d1ff36565 | |
Mark Qvist | 628777900e | |
Mark Qvist | 12e87425dc | |
Mark Qvist | 873f049e20 | |
Mark Qvist | 2ea963ed03 | |
Mark Qvist | 1d1276d6dd | |
Mark Qvist | 83741724b0 | |
Mark Qvist | a4143cfe6d | |
Mark Qvist | 3d645ae2f4 | |
Mark Qvist | 5ba125c801 | |
Mark Qvist | badb392898 | |
Mark Qvist | c0e1ce8d86 | |
markqvist | 0bc248c5e4 | |
Mark Qvist | 798dfb1727 | |
Mark Qvist | a451b987aa | |
Mark Qvist | f01074e5b8 | |
Mark Qvist | 0e12442a28 | |
Jürg Schulthess | a4e8489a34 | |
Jürg Schulthess | 276b6fbd22 | |
Jürg Schulthess | 52ab08c289 | |
Mark Qvist | 38236366cf | |
Mark Qvist | af3cc3c5dd | |
Mark Qvist | 35ed1f950c | |
Mark Qvist | c050ef945e | |
Mark Qvist | bed71fa3f8 | |
Mark Qvist | cf125daf5c | |
Mark Qvist | 9f425c2e8d | |
Mark Qvist | 0dc78241ac | |
Mark Qvist | 01e963e891 | |
Mark Qvist | b3731524ac | |
Mark Qvist | 67c7395ea7 | |
Mark Qvist | fddf36a920 | |
Mark Qvist | 4f561a8c0c | |
Mark Qvist | 778d6105c1 | |
Mark Qvist | 60c94dc9b6 | |
Mark Qvist | f71395e449 | |
Mark Qvist | 1abacca9bf | |
Mark Qvist | 40281d5403 | |
Mark Qvist | e0da489156 | |
Mark Qvist | 2dcf1350e7 | |
Mark Qvist | 1e280611ce | |
Mark Qvist | f1d107846f | |
Mark Qvist | cc951dcb53 | |
Mark Qvist | b5856a3706 | |
Mark Qvist | ed3479da9a | |
Mark Qvist | 5e15f421b7 | |
Mark Qvist | 0a9366ba6e | |
Mark Qvist | cf31435f39 | |
Mark Qvist | 9f58860842 | |
Mark Qvist | 875348383d | |
Mark Qvist | f79f190525 | |
Mark Qvist | 5e27a81412 | |
Mark Qvist | 0dcb009579 | |
Mark Qvist | 943f76804b | |
Mark Qvist | 8bbe6ae3ae | |
Mark Qvist | f0d85dd078 | |
Mark Qvist | f85dda1829 | |
markqvist | 91e064cdf1 | |
Mark Qvist | fb4e53f6e3 | |
Mark Qvist | 03340ed091 | |
Mark Qvist | ed424fa0a2 | |
Mark Qvist | 406ab216d1 | |
Mark Qvist | 00d8a2064d | |
Mark Qvist | 38b920e393 | |
Mark Qvist | 1ed000c4d9 | |
Mark Qvist | d360958d10 | |
Mark Qvist | fcdb455d73 | |
Mark Qvist | 575639b721 | |
Mark Qvist | 492573f9fe | |
Mark Qvist | c5d30f8ee6 | |
Mark Qvist | 3c4791a622 | |
Mark Qvist | 803a5736c9 | |
Mark Qvist | 267ffbdf5f | |
Mark Qvist | 52028aa44c | |
Mark Qvist | c5248d53d6 | |
Mark Qvist | 2d2f0947ac | |
Mark Qvist | 4fa616a326 | |
Mark Qvist | 136713eec1 | |
Mark Qvist | 0fd75cb819 | |
Mark Qvist | ea52153969 | |
Mark Qvist | 3854781028 | |
Conner Vieira | ec2805f357 | |
Mark Qvist | b5cb3a65dd | |
Mark Qvist | c79cb3aa20 | |
Mark Qvist | 8bff119691 | |
Mark Qvist | 5e0b2c5b42 | |
Mark Qvist | 8908022b88 | |
Mark Qvist | b0dda0ed86 | |
Mark Qvist | 6ae72d4225 | |
Mark Qvist | 0a188a2d39 | |
Mark Qvist | 036abb28fe | |
Mark Qvist | a732767a28 | |
Mark Qvist | 32a1261d98 | |
Mark Qvist | 27c5af3bbc | |
Mark Qvist | 5872108da3 | |
Mark Qvist | 8f6c6b76de | |
Mark Qvist | 99db625c62 | |
Mark Qvist | fdf6a31cbd | |
Mark Qvist | 75f353d7e2 | |
Mark Qvist | 82f204fb44 | |
Mark Qvist | 8d4492ecfd | |
Mark Qvist | f8a53458d6 | |
Mark Qvist | 4229837170 | |
Mark Qvist | 4be2ae6c70 | |
Mark Qvist | dbdeba2fe0 | |
Mark Qvist | 7e34b61f37 | |
Mark Qvist | bf726ed2c7 | |
Mark Qvist | fa54a2affe | |
Mark Qvist | 62e1d0e554 | |
Mark Qvist | 9c823a038b | |
Mark Qvist | 1e6cd50f46 | |
Mark Qvist | 06716e4873 | |
Mark Qvist | 8e4a1e3ffa | |
Mark Qvist | 0abb3bd4c3 | |
Mark Qvist | 336574daed | |
Mark Qvist | 07938ba111 | |
Mark Qvist | e699eb6d25 | |
Mark Qvist | 3864549752 | |
Mark Qvist | 0b934cd0f6 | |
Mark Qvist | 5bac38a752 | |
Mark Qvist | 72c8d4d3dd | |
Mark Qvist | b8c6ea015e | |
Mark Qvist | ffe1beb7ae | |
Mark Qvist | 21c6dbfce0 | |
Mark Qvist | 70cbb8dc79 | |
Mark Qvist | 334f2a364d | |
Mark Qvist | b477354235 | |
Mark Qvist | 254c966159 | |
Mark Qvist | 7ee9b07d9c | |
Mark Qvist | 839b72469c | |
Mark Qvist | 874d76b343 | |
Mark Qvist | 7497e7aa0c | |
Mark Qvist | efa084fb0f | |
Mark Qvist | 48e4a27054 | |
Mark Qvist | 96cf6a790e | |
Mark Qvist | d7b54ff397 | |
Mark Qvist | 90ab065073 | |
Mark Qvist | b6f0784311 | |
Mark Qvist | e37ec654ee | |
Mark Qvist | b237d51276 | |
Mark Qvist | 155ea24008 | |
Mark Qvist | 8c8affc800 | |
Mark Qvist | 481062fca1 | |
Mark Qvist | ffcc5560dc | |
Mark Qvist | 09e146ef0b | |
Mark Qvist | 4c6b04ff69 | |
Mark Qvist | 9889b479d1 | |
Mark Qvist | 95dec00c76 | |
Mark Qvist | cff268926d | |
Mark Qvist | 6fa88f4e4a | |
Mark Qvist | ab8e6791fe | |
Mark Qvist | 13c45cc59a | |
Mark Qvist | 67c468884f | |
Mark Qvist | f028d44609 | |
Mark Qvist | 18b952e612 | |
Mark Qvist | 25178d8f50 | |
Mark Qvist | 1c0b7c00fd | |
Mark Qvist | 2439761529 | |
Mark Qvist | 8803dd5b65 | |
Mark Qvist | d15d04eae5 | |
Mark Qvist | bf40f74a4a | |
Mark Qvist | c0339c0f46 | |
Mark Qvist | b64bb166c0 | |
Mark Qvist | 31d30030dc | |
Mark Qvist | 556e111a98 | |
Mark Qvist | 70b0dd621b | |
Mark Qvist | f7d3212651 | |
Mark Qvist | 0a29f0cfa1 | |
Mark Qvist | 97153ad59d | |
Mark Qvist | bc8378fb60 | |
markqvist | 3320cf8da8 | |
markqvist | bb53bd3f27 | |
Mark Qvist | 73eed59fab | |
Santiago Lema | 91ede52634 | |
Dionysis Grigoropoulos | 93f13a98b2 | |
Mark Qvist | c87c5c9709 | |
markqvist | b0c6c53430 | |
Mark Qvist | 94a5222390 | |
Dionysis Grigoropoulos | 98bb304060 | |
Mark Qvist | 08bfd923ea | |
Mark Qvist | ae28f04ce4 | |
Mark Qvist | 024a742f2a | |
Mark Qvist | df184f3e54 | |
Mark Qvist | 5542410afa | |
Mark Qvist | 99205cdc0f | |
Mark Qvist | 8c936af963 | |
Mark Qvist | 7fe751e74f | |
markqvist | 6d551578c3 | |
markqvist | 40c85fb607 | |
Dionysis Grigoropoulos | 743736b376 | |
Mark Qvist | 7fdb431d70 | |
Mark Qvist | ebcc3d8912 | |
Mark Qvist | 32e29a54c3 | |
Mark Qvist | 049733c4b6 | |
Mark Qvist | 420d58527d | |
Mark Qvist | bab779a34c | |
markqvist | 45aa71b2b7 | |
SebastianObi | 6dcfe2cad6 | |
SebastianObi | f206047908 | |
Nathan Petrangelo | 6ce979a7de | |
Mark Qvist | 97f97eb063 | |
Mark Qvist | f3db762e9f | |
Mark Qvist | f9f623dfa5 | |
Mark Qvist | ffa6bec3b4 | |
Mark Qvist | 4f78973751 | |
Mark Qvist | a8a7af4b74 | |
Mark Qvist | 45295c779c | |
Mark Qvist | a82376d1f5 | |
Mark Qvist | 75c6248264 | |
Mark Qvist | 9294ab4f97 | |
Mark Qvist | f01193e854 | |
Mark Qvist | d7375bc4c3 | |
Mark Qvist | 1a860c6ffd | |
Mark Qvist | 800ed3af7a | |
Mark Qvist | 9c8e79546c | |
Mark Qvist | 4c272aa536 | |
Mark Qvist | e184861822 | |
Mark Qvist | d40e19f08d | |
Mark Qvist | 817ee0721a | |
Mark Qvist | 22ec4afdab | |
Mark Qvist | 61626897e7 | |
Mark Qvist | 6fd3edbb8f | |
Mark Qvist | fc5b02ed5d | |
Mark Qvist | a06e752b76 | |
Mark Qvist | 3a947bf81b | |
Mark Qvist | 31121ca885 | |
Mark Qvist | 387b8c46ff | |
Mark Qvist | 66fda34b20 | |
Mark Qvist | 1542c5f4fe | |
Mark Qvist | 523fc7b8f9 | |
Mark Qvist | 73faf04ea1 | |
Mark Qvist | e10ddf9d2d | |
Mark Qvist | 641a7ea75d | |
Mark Qvist | e543d5c27f | |
Mark Qvist | 01c59ab0c6 | |
Mark Qvist | a4c64abed4 | |
Mark Qvist | 7df11a6f67 | |
Mark Qvist | 1bd6020163 | |
Mark Qvist | b3ac3131b5 | |
Mark Qvist | f522cb1db1 | |
Mark Qvist | d96a4853fe | |
Mark Qvist | 52a0447fea | |
Mark Qvist | e82e6d56f1 | |
Mark Qvist | 3967ef453d | |
Mark Qvist | 76f7751d5f | |
Mark Qvist | 8716ffc873 | |
Mark Qvist | b476e4cfb0 | |
Mark Qvist | 7ec77a10d3 | |
Mark Qvist | 55a9c5ef71 | |
Mark Qvist | 6d3ba31993 | |
Mark Qvist | d3f4a674aa | |
Mark Qvist | 599ab20ed0 | |
Mark Qvist | dcf33e125b | |
Mark Qvist | 01600b96a4 | |
Mark Qvist | 64bdc4c18c | |
Mark Qvist | 0889b8a7c5 | |
Mark Qvist | 1b2fee3ab8 | |
Mark Qvist | da7a4433c0 | |
Mark Qvist | 5e5d89cc92 | |
Mark Qvist | a3bee4baa9 | |
Mark Qvist | fab83ec399 | |
Mark Qvist | b740e36985 | |
Mark Qvist | 29693c6fe2 | |
Mark Qvist | 72638f40a6 | |
Mark Qvist | 8d29e83d90 | |
Mark Qvist | 53b325d34d | |
Mark Qvist | d31cf6e297 | |
Mark Qvist | e386a5d08b | |
Mark Qvist | d467ed9ece | |
Mark Qvist | 892a467d74 | |
markqvist | 4366e71f34 | |
Mark Qvist | 7e9998b4fd | |
markqvist | 79abe93139 | |
Mark Qvist | d69d4b3920 | |
Mark Qvist | 3300541181 | |
Mark Qvist | 3848059f19 | |
Mark Qvist | 30021d89cb | |
Mark Qvist | 29019724bd | |
Maya | ba7838c04e | |
Maya | af16c68e47 | |
Maya | bda5717051 | |
Mark Qvist | fac4973329 | |
Mark Qvist | 90cfaa4e82 | |
Mark Qvist | 443aa575df | |
Mark Qvist | 619771c3a3 | |
Mark Qvist | 18a56cfd52 | |
Mark Qvist | 55c39ff27c | |
Mark Qvist | 159c7a9a52 | |
Mark Qvist | af8edc335b | |
Mark Qvist | 4d3ea37bc3 | |
Mark Qvist | 226004da94 | |
Mark Qvist | 47b358351f | |
markqvist | f5d77a1dfb | |
Aaron Heise | 9c9f0a20f9 | |
Aaron Heise | 6d9d410a70 | |
Mark Qvist | d8f3ad8d3f | |
Mark Qvist | a1b75b9746 | |
Mark Qvist | 80f3bfaece | |
Mark Qvist | 37b2d8a6ec | |
Mark Qvist | 777fea9cea | |
Mark Qvist | bbfdd37935 | |
Mark Qvist | 07484725a0 | |
Mark Qvist | 709b126a67 | |
Mark Qvist | 28e6302b3d | |
Mark Qvist | 27861e96f8 | |
markqvist | e36312a3cb | |
Aaron Heise | 5b5dbdaa91 | |
Aaron Heise | 99dc97365f | |
Aaron Heise | aac2b9f987 | |
Aaron Heise | 067c275c46 | |
Mark Qvist | 58004d7c05 | |
Mark Qvist | aa0d9c5c13 | |
Mark Qvist | 9e46950e28 | |
markqvist | a6551fc019 | |
markqvist | a06ae40797 | |
markqvist | 1db08438df | |
markqvist | 89aa51ab61 | |
Dionysis Grigoropoulos | ddb7a92c15 | |
Greg Troxel | e273900e87 | |
Aaron Heise | d2d121d49f | |
Aaron Heise | 9963cf37b8 | |
Aaron Heise | 72300cc821 | |
Aaron Heise | 8168d9bb92 | |
Aaron Heise | 8f0151fed6 | |
Aaron Heise | d3c4928eda | |
Aaron Heise | 68f95cd80b | |
Aaron Heise | 42935c8238 | |
Aaron Heise | 118acf77b8 | |
Aaron Heise | 661964277f | |
Aaron Heise | 464dc23ff0 | |
Aaron Heise | 44dc2d06c6 | |
Aaron Heise | c00b592ed9 | |
Aaron Heise | e005826151 | |
Aaron Heise | a61b15cf6a | |
Aaron Heise | fe3a3e22f7 | |
Aaron Heise | 68cb4a6740 | |
Mark Qvist | 9f06bed34c | |
Mark Qvist | 3b1936ef48 | |
Michael Faragher | 5b3d26a90a | |
markqvist | b381a61be8 | |
Mark Qvist | 1e2fa2068c | |
Mark Qvist | c604214bb9 | |
Mark Qvist | e738c9561a | |
Mark Qvist | 994d1c8ee5 | |
Mark Qvist | ce21800537 | |
Mark Qvist | d02cdd5471 | |
Mark Qvist | 7018e412d5 | |
Mark Qvist | 94f7505076 | |
Mark Qvist | b82ecf047a | |
Mark Qvist | f21b93403a | |
Mark Qvist | 59c88bc43b | |
Mark Qvist | 8e98c1b038 | |
Mark Qvist | 4d3570fe4c | |
markqvist | 3706769c33 | |
Mark Qvist | ce91c34b21 | |
Mark Qvist | e37aa5e51a | |
Mark Qvist | 80af0f4539 | |
Mark Qvist | fc818f00f1 | |
Mark Qvist | a55d39b7d4 | |
Mark Qvist | 8e264359db | |
markqvist | cbaeaa9f81 | |
Dionysis Grigoropoulos | 323c2285ce | |
Mark Qvist | 5b6d0ec337 | |
Mark Qvist | 2bbb0f5ec2 | |
Mark Qvist | e385c79abd | |
Mark Qvist | 86faf6c28d | |
Mark Qvist | 6d8a3f09e5 | |
Mark Qvist | 1e88a390f4 | |
Mark Qvist | e9ae255f84 | |
Mark Qvist | 42dfee8557 | |
Mark Qvist | 177e724457 | |
Mark Qvist | 1b55ac7f24 | |
Mark Qvist | 5447ed85c1 | |
Mark Qvist | d7aacba797 | |
Mark Qvist | b92ddeccff | |
Mark Qvist | 6fac96ec18 | |
Mark Qvist | 53ceafcebd | |
Mark Qvist | 4df67304d6 | |
Mark Qvist | ac07ba1368 | |
Mark Qvist | ece064d46e | |
Mark Qvist | 86ae42a049 | |
Mark Qvist | 08e480387b | |
Mark Qvist | f4241ae9c2 | |
Mark Qvist | b6928b7d83 | |
markqvist | 3b2fbe02c6 | |
markqvist | a38bde7801 | |
markqvist | df132d1d59 | |
Mark Qvist | 143f7fa683 | |
Dionysis Grigoropoulos | feb614d186 | |
Mark Qvist | 159be78f23 | |
Mark Qvist | 4a6c6568e2 | |
Mark Qvist | e64fa08c74 | |
markqvist | 6651976423 | |
Juraj Bednar | 5decf22b8b | |
Mark Qvist | a731a8b047 | |
Mark Qvist | 9bb9571fc9 | |
Dionysis Grigoropoulos | 6ecae615de | |
Dionysis Grigoropoulos | 72ca6316f6 | |
Mark Qvist | 0f023cc533 | |
Mark Qvist | 9f9a4a14d3 | |
Mark Qvist | 0609251270 | |
Mark Qvist | e4f0b2dc39 | |
Mark Qvist | 2ef06f2bd3 | |
Mark Qvist | c5a586175d | |
Mark Qvist | 2a1ec6592c | |
Mark Qvist | eed7698ed3 | |
Mark Qvist | 205c612a0f | |
Mark Qvist | 8d96673bec | |
Mark Qvist | 62a13eb0e8 | |
Mark Qvist | 10d03753b5 | |
Mark Qvist | f19b87759f | |
Mark Qvist | 04f009f57c | |
Mark Qvist | 78253093c7 | |
Mark Qvist | 63d54dbecb | |
Mark Qvist | 32922868b9 | |
Mark Qvist | e18f6d2969 | |
Mark Qvist | 08f4462ef8 | |
Mark Qvist | 7ed0726feb | |
Mark Qvist | 2839d39350 | |
Mark Qvist | c992573257 | |
Mark Qvist | d64e547436 | |
Mark Qvist | 7eb0e03cb9 | |
Mark Qvist | f1deef696b | |
Mark Qvist | 48e14902d0 | |
Mark Qvist | 8acf63a195 | |
Mark Qvist | 392bd65322 | |
Mark Qvist | 4ab3074d30 | |
Mark Qvist | 4de612e2fb | |
Mark Qvist | 3b192bfb47 | |
Mark Qvist | 0d562c89a7 | |
Mark Qvist | 972922fff1 | |
Mark Qvist | 296a2d91e8 | |
Mark Qvist | 446fb79786 | |
Mark Qvist | 700601d63e | |
Mark Qvist | 274c7199b0 | |
Mark Qvist | 7960226883 | |
Mark Qvist | bb74878e94 | |
Mark Qvist | 549d22be68 | |
Mark Qvist | 5c2c935b6f | |
Mark Qvist | 8402541c73 | |
Mark Qvist | c34c268a6a | |
Mark Qvist | 8fcdc4613c | |
Mark Qvist | f645fa569b | |
Mark Qvist | 469947dab9 | |
Mark Qvist | 2386fc3635 | |
Mark Qvist | e9e98a00c2 | |
Mark Qvist | b305eb8e0a | |
Mark Qvist | dd7931d421 | |
Mark Qvist | 191dce1301 | |
Mark Qvist | 3b5a27ba60 | |
Mark Qvist | 3c91f7f18b | |
Mark Qvist | 171457713b | |
Mark Qvist | 67ee8d6aab | |
Mark Qvist | 13fa7d49d9 | |
Mark Qvist | 66d921e669 | |
Mark Qvist | 85f60ea04e | |
Mark Qvist | 4870e741f6 | |
Mark Qvist | f71c1986af | |
Mark Qvist | 30d8e351dd | |
Mark Qvist | 5e62e3bc22 | |
Mark Qvist | 1a67e276ad | |
Mark Qvist | df37a4a884 | |
Mark Qvist | d26bbbd59f | |
Mark Qvist | 2a264fa7d6 | |
Mark Qvist | d5e0a461cf | |
Mark Qvist | e28dbd4afa | |
Mark Qvist | 8626dcd69f | |
Mark Qvist | e34f21f4dc | |
Mark Qvist | f692e81b8e | |
Mark Qvist | 28e43b52f9 | |
Mark Qvist | 680d17fb98 | |
Mark Qvist | 1e477c976c | |
Mark Qvist | ab301cdb79 | |
Mark Qvist | cecb4b3acb | |
Mark Qvist | de53a105a4 | |
Mark Qvist | 9e4ae3c6fe | |
Mark Qvist | 3482d84bc0 | |
Mark Qvist | 51c5c85fcd | |
Mark Qvist | 57aeab43a2 | |
Mark Qvist | 92cccddaab | |
Mark Qvist | 3de182192a | |
Mark Qvist | aca6b0c110 | |
Mark Qvist | 3d6e7a9597 | |
markqvist | 21da55dd39 | |
thatv | 9e664af1c6 | |
Mark Qvist | 7736ed589e | |
Mark Qvist | f22504d080 | |
Mark Qvist | f22e5cc200 | |
Mark Qvist | 87b73b6c67 | |
Mark Qvist | 36906f6567 | |
Mark Qvist | 52edb54d21 | |
Mark Qvist | 88b88b9b64 | |
Mark Qvist | 76fcad0b53 | |
Mark Qvist | 01e520b082 | |
Mark Qvist | 1d2a0fe4c8 | |
Mark Qvist | 0f19ced9d3 | |
Mark Qvist | 4ca32c039d | |
Mark Qvist | 81ec701240 | |
Mark Qvist | b16d614495 | |
Mark Qvist | 5f7e37187f | |
Mark Qvist | 622fd6cf46 | |
Mark Qvist | b9d73518dd | |
Mark Qvist | 17bdf45ac1 | |
Mark Qvist | 36052e2c61 | |
Mark Qvist | 06d232f889 | |
Mark Qvist | f9b3c749e0 | |
Mark Qvist | 63a59753af | |
Mark Qvist | 20696e7827 | |
Mark Qvist | 127c9862da | |
Mark Qvist | fee9473cac | |
Mark Qvist | 5337b72853 | |
Mark Qvist | 9bc5d91106 | |
Mark Qvist | 45ae66e9bf | |
Mark Qvist | f03cf34370 | |
Mark Qvist | 47db2a3bd5 | |
Mark Qvist | 40cd961eab | |
Mark Qvist | 34cdd4bf0f | |
Mark Qvist | b0ef58e5ca | |
Mark Qvist | b6020b5ea8 | |
Mark Qvist | ee544fcf31 | |
Mark Qvist | 886b0ac0ca | |
Mark Qvist | ed4070a3d1 | |
Mark Qvist | 6d6568852a | |
Mark Qvist | b479e14ca5 | |
Mark Qvist | 8fec5cedbe | |
Mark Qvist | 9852a3534b | |
Mark Qvist | 81fc920bdf | |
Mark Qvist | 5b1b18e84a | |
Mark Qvist | 9c8c143c62 | |
Mark Qvist | db9858d75f | |
Mark Qvist | 874405cbdd | |
Mark Qvist | 2a3f2b8bdc | |
Mark Qvist | 9aae06c694 | |
Mark Qvist | 70ffc38c49 | |
Mark Qvist | 73071b0755 | |
Mark Qvist | ab697dc583 | |
Mark Qvist | ecc78fa45f | |
Mark Qvist | e5309caf48 | |
Mark Qvist | 094d2f2079 | |
Mark Qvist | 5a917c9dac | |
Mark Qvist | 1df0eea0b7 | |
Mark Qvist | 718c3577db | |
Mark Qvist | 5111c32854 | |
Mark Qvist | 63d4e9a399 | |
Mark Qvist | 60773ceb16 | |
Mark Qvist | 5d6c3dd891 | |
Mark Qvist | a564dd2b2d | |
Mark Qvist | 16cf1ab1ba | |
Mark Qvist | 47e326c8a9 | |
Mark Qvist | 9a7585cbef | |
Mark Qvist | 902f7af64d | |
Mark Qvist | 004bf27526 | |
Mark Qvist | 9cad90266e | |
Mark Qvist | e9de01e10e | |
Mark Qvist | 372bedcd85 | |
Mark Qvist | 1141a3034d | |
Mark Qvist | 3f3276ca45 | |
Mark Qvist | 6e742f7267 | |
Mark Qvist | d3525943c2 | |
Mark Qvist | cb55189e5c | |
Mark Qvist | 0b98a9bff4 | |
Mark Qvist | a8d6e1780a | |
Mark Qvist | cb9840250a | |
Mark Qvist | 16f8725906 | |
markqvist | 2656157462 | |
markqvist | c9c7469b32 | |
markqvist | 0f429e2385 | |
Mark Qvist | 89d8342ce5 | |
Mark Qvist | c18997bf5b | |
Mark Qvist | 1e4dd9d6f0 | |
Mark Qvist | b296c10541 | |
Mark Qvist | 9065de5fb4 | |
Mark Qvist | 7997fd104e | |
Mark Qvist | 11667504b2 | |
Mark Qvist | 7744c4ffe6 | |
Mark Qvist | 8a61d2c8d5 | |
Mark Qvist | 1380016995 | |
markqvist | f2aff3fbd5 | |
Mark Qvist | b859984ebe | |
Mark Qvist | 9593b1c295 | |
Mark Qvist | 3d6455fb37 | |
Mark Qvist | b085127d6e | |
Mark Qvist | 80ffa5ebc3 | |
Mark Qvist | 76fb73f46c | |
Mark Qvist | e51b0077c7 | |
Mark Qvist | c18806c912 | |
Mark Qvist | 683881d6cd | |
Mark Qvist | f62d9946ac | |
Mark Qvist | 893a463663 | |
Mark Qvist | 39b788867d | |
Mark Qvist | 2abd8a1aae | |
Mark Qvist | 7940ac0812 | |
Mark Qvist | 3f2075da6f | |
Mark Qvist | e90b2866b4 | |
Mark Qvist | 8886ed5794 | |
Mark Qvist | 32ee4216fd | |
Mark Qvist | 571ad2c8fb | |
Mark Qvist | 0c47ff1ccc | |
Mark Qvist | 18f450c58b | |
Mark Qvist | b3d85b583f | |
Mark Qvist | 03695565ba | |
Mark Qvist | 3e380a8fc7 | |
Mark Qvist | fd35451927 | |
Mark Qvist | 921987c999 | |
Mark Qvist | 81e0989070 | |
Mark Qvist | 3fa7698438 | |
Mark Qvist | 75e32af1c5 | |
Mark Qvist | 9775893840 | |
Mark Qvist | e5c0ee4153 | |
Mark Qvist | 4042dd6ef7 | |
Mark Qvist | af538e0489 | |
Mark Qvist | 8f4cf433ba | |
Mark Qvist | c55e1e9628 | |
Mark Qvist | be02586133 | |
Mark Qvist | 6db742ade7 | |
Mark Qvist | 6a53298aa2 | |
Mark Qvist | f00b6a6fdb | |
markqvist | dc0a0735db | |
huyndao | b230edd21d | |
huyndao | 30e75b1bfb | |
Mark Qvist | 7f70ffdc21 | |
Mark Qvist | 6e6b49dcd2 | |
Mark Qvist | 383f96d82a | |
Mark Qvist | ebef2da7a8 | |
Mark Qvist | 4946d9f2eb | |
Mark Qvist | fcb61e3ebf | |
Mark Qvist | eae788957a | |
Mark Qvist | 045a9d8451 | |
Mark Qvist | da644d33ea | |
Mark Qvist | e03fc38920 | |
Mark Qvist | c36c0368ef | |
Mark Qvist | 3d979e2d65 | |
Mark Qvist | 5158613501 | |
Mark Qvist | b53185779a | |
Mark Qvist | 5b63f84491 | |
Mark Qvist | fd2cc1231f | |
Mark Qvist | 76950ee3de | |
markqvist | 8565b2fdf5 | |
markqvist | 2a915eab2d | |
Joshua Fuller | 36654c1414 | |
Mark Qvist | fdf0456cf0 | |
Mark Qvist | 8cff18f8ce | |
Mark Qvist | 5e072affe4 | |
Mark Qvist | fc4c7638a6 | |
Mark Qvist | 532f9ee665 | |
Mark Qvist | 4a725de935 | |
Mark Qvist | 2335a71427 | |
Mark Qvist | 3e70dd6134 | |
Mark Qvist | 474521056b | |
Mark Qvist | d33154bfdb | |
Mark Qvist | 8f82a2b87f | |
Mark Qvist | 304610c682 | |
Mark Qvist | bc39a1acf1 | |
Mark Qvist | 20b7278f7b | |
Mark Qvist | 1f66a9b0c0 | |
Mark Qvist | f464ecfcb5 | |
Mark Qvist | 49fdeb9bc4 | |
Mark Qvist | 40560a31f2 | |
Mark Qvist | f7d8a4b3b3 | |
Mark Qvist | c498bf5668 | |
Mark Qvist | 2e19304ebf | |
Mark Qvist | 1cd7c85a52 | |
Mark Qvist | 171f43f4e3 | |
Mark Qvist | 09a1088437 | |
Mark Qvist | 6346bc54a8 | |
Mark Qvist | 40e25d8e40 | |
Mark Qvist | e19438fdcc | |
markqvist | d85ea07b5e | |
Mark Qvist | 4dda0e8a5b | |
Mark Qvist | 5faf13d505 | |
Mark Qvist | 2be1c7633d | |
Mark Qvist | 6ac2f437b9 | |
Mark Qvist | 2fe9dec459 | |
Mark Qvist | 8f8da080f5 | |
Mark Qvist | 01a973db91 | |
Mark Qvist | 1c4528dca1 | |
Mark Qvist | a99031873d | |
Mark Qvist | ab1186eaf7 | |
Mark Qvist | 940c889440 | |
Mark Qvist | ac7c36029b | |
Mark Qvist | c79811e040 | |
Mark Qvist | 7545613c52 | |
Mark Qvist | 7bd6da034a | |
Mark Qvist | 34f10d1196 | |
Mark Qvist | be84e8a731 | |
Mark Qvist | 7331bd2c09 | |
Mark Qvist | 6bfd0bf4eb | |
Mark Qvist | 3013c10180 | |
Mark Qvist | 95a34dad4b | |
Mark Qvist | a3bc2ef38f | |
Mark Qvist | aa255d0713 | |
Mark Qvist | 5a8152c589 | |
Mark Qvist | 8a24dbae40 | |
Mark Qvist | 2f1329e581 | |
Mark Qvist | 2166294a7a | |
Mark Qvist | 8042f5eaa1 | |
Mark Qvist | 1b1ab42aaa | |
Mark Qvist | ae8fcb88d8 | |
Mark Qvist | 98b232bc4c | |
Mark Qvist | d7a444556a | |
Mark Qvist | 58eaceb48c | |
Mark Qvist | 3c81f93d4a | |
Mark Qvist | 2685e043ea | |
Mark Qvist | 214ee9d771 | |
Mark Qvist | d39c1893e7 | |
Mark Qvist | 548cbd50d8 | |
Mark Qvist | 6b06875c42 | |
Mark Qvist | d7262c7cbe | |
Mark Qvist | d9a021465e | |
Mark Qvist | 8451bbe7e6 | |
Mark Qvist | 1ac7238347 | |
Mark Qvist | ea7762cbc0 | |
Mark Qvist | c4a7d17b2f | |
Mark Qvist | c758c4d279 | |
Mark Qvist | d136eac32b | |
Mark Qvist | f74e6d12c9 | |
Mark Qvist | 6f68d6edc4 | |
Mark Qvist | 076d6b09c4 | |
Mark Qvist | 8c484c786f | |
Mark Qvist | 363d56d49d | |
markqvist | 2a581a9a9b | |
Mark Qvist | 2779852417 | |
Mark Qvist | e0f69344c2 | |
Mark Qvist | 469c9919cb | |
Mark Qvist | 6518370d79 | |
Mark Qvist | ffe61e701a | |
Mark Qvist | 7f65c767f0 | |
Mark Qvist | 157a54d4a4 | |
Mark Qvist | c8c0f77c81 | |
Mark Qvist | 4c3a82cf20 | |
Mark Qvist | 1ec83b535f | |
Mark Qvist | 31914a10aa | |
Mark Qvist | 6e369bf82f | |
Mark Qvist | 39059a365d | |
Mark Qvist | 0b2dba7977 | |
Mark Qvist | c6e2ba2cf3 | |
Mark Qvist | c5918395de | |
Mark Qvist | 861ac92c4c | |
Mark Qvist | 715e35d626 | |
Mark Qvist | a8ea7bcca6 | |
Mark Qvist | 534a8825eb | |
Mark Qvist | 89f3c0f649 | |
Mark Qvist | e4a82d5358 | |
Mark Qvist | 68cd79768b | |
Mark Qvist | 701c624d0a | |
Mark Qvist | ec90af750d | |
Mark Qvist | 2c1b3a0e5b | |
Mark Qvist | 02968baa76 | |
Mark Qvist | 06fefebc08 | |
Mark Qvist | 513a82e363 | |
Mark Qvist | a4b80e7ddb | |
Mark Qvist | be6910e4e0 | |
Mark Qvist | 0a8b755230 | |
Mark Qvist | d334613888 | |
Mark Qvist | 14bdcaf770 | |
Mark Qvist | 592c405067 | |
Mark Qvist | bb8012ad50 | |
Mark Qvist | 648e9a68b8 | |
Mark Qvist | 8c167b8f3d | |
Mark Qvist | bd933dc1df | |
Mark Qvist | 76f12b4854 | |
Mark Qvist | 30af212217 | |
Mark Qvist | 6c22ccc6d4 | |
Mark Qvist | 26dae3830e | |
Mark Qvist | a776d59f03 | |
Mark Qvist | 5b20caf759 | |
Mark Qvist | a800ce43f3 | |
Mark Qvist | 7916b8e7f4 | |
Mark Qvist | 60e3c7348a | |
Mark Qvist | cc9970c83e | |
Mark Qvist | c46b98f163 | |
Mark Qvist | 86061f9f47 | |
Mark Qvist | e0b795b4d0 | |
Mark Qvist | 34efbc6100 | |
Mark Qvist | 94edc8eff3 | |
Mark Qvist | e2aeb56c12 | |
Mark Qvist | 9a4325ce8e | |
Mark Qvist | 06fffe5a94 | |
Mark Qvist | 7a596882a8 | |
Mark Qvist | 76f86f782a | |
Mark Qvist | 4bd5f05e0e | |
Mark Qvist | 5d3a0efc89 | |
Mark Qvist | d1a461a2b3 | |
Mark Qvist | 0b1e7df31a | |
Mark Qvist | 301661c29e | |
Mark Qvist | b2b6708e8f | |
Mark Qvist | 19a033db96 | |
Mark Qvist | 5bb510b589 | |
Mark Qvist | f1dcda82ac | |
Mark Qvist | d24f3a490a | |
Mark Qvist | 715a84c6f2 | |
Mark Qvist | 379e56b2ce |

```diff
@@ -0,0 +1,11 @@
+blank_issues_enabled: false
+contact_links:
+  - name: ✨ Feature Request or Idea
+    url: https://github.com/markqvist/Reticulum/discussions/new?category=ideas
+    about: Propose and discuss features and ideas
+  - name: 💬 Questions, Help & Discussion
+    about: Ask anything, or get help
+    url: https://github.com/markqvist/Reticulum/discussions/new/choose
+  - name: 📖 Read the Reticulum Manual
+    url: https://markqvist.github.io/Reticulum/manual/
+    about: The complete documentation for Reticulum
```

```diff
@@ -0,0 +1,35 @@
+---
+name: "\U0001F41B Bug Report"
+about: Report a reproducible bug
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+**Read the Contribution Guidelines**
+Before creating a bug report on this issue tracker, you **must** read the [Contribution Guidelines](https://github.com/markqvist/Reticulum/blob/master/Contributing.md). Issues that do not follow the contribution guidelines **will be deleted without comment**.
+
+- The issue tracker is used by developers of this project. **Do not use it to ask general questions, or for support requests**.
+- Ideas and feature requests can be made on the [Discussions](https://github.com/markqvist/Reticulum/discussions). **Only** feature requests accepted by maintainers and developers are tracked and included on the issue tracker. **Do not post feature requests here**.
+- After reading the [Contribution Guidelines](https://github.com/markqvist/Reticulum/blob/master/Contributing.md), delete this section from your bug report.
+
+**Describe the Bug**
+A clear and concise description of what the bug is.
+
+**To Reproduce**
+Describe in detail how to reproduce the bug.
+
+**Expected Behavior**
+A clear and concise description of what you expected to happen.
+
+**Logs & Screenshots**
+Please include any relevant log output. If applicable, also add screenshots to help explain your problem.
+
+**System Information**
+- OS and version
+- Python version
+- Program version
+
+**Additional context**
+Add any other context about the problem here.
```

```diff
@@ -7,4 +7,9 @@ RNS/Utilities/RNS
 build
 dist
 docs/build
 rns*.egg-info
+profile.data
+tests/rnsconfig/storage
+tests/rnsconfig/logfile*
+*.data
+*.result
```

```diff
@@ -0,0 +1,43 @@
+# Contributing to Reticulum
+
+Welcome, and thank you for your interest in contributing to Reticulum!
+
+Apart from writing code, there are many ways in which you can contribute. Before interacting with this community, read these short and simple guidelines.
+
+## Expected Conduct
+
+First and foremost, there is one simple requirement for taking part in this community: While we primarily interact virtually, your actions matter and have real consequences. Therefore: **Act like a responsible, civilized person** - also in the face of disputes and heated disagreements. Speak your mind here, discussions are welcome. Just do so in the spirit of being face-to-face with everyone else. Thank you.
+
+## Asking Questions
+
+If you want to ask a question, **do not open an issue**. The issue tracker is used by people *working on Reticulum* to track bugs, issues and improvements.
+
+Instead, ask away on the [discussions](https://github.com/markqvist/Reticulum/discussions) or on the [Reticulum Matrix channel](https://matrix.to/#/#reticulum:matrix.org) at `#reticulum:matrix.org`.
+
+## Providing Feedback & Ideas
+
+Likewise, feedback, ideas and feature requests are a very welcome way to contribute, and should also be posted on the [discussions](https://github.com/markqvist/Reticulum/discussions), or on the [Reticulum Matrix channel](https://matrix.to/#/#reticulum:matrix.org) at `#reticulum:matrix.org`.
+
+Please do not post feature requests or general ideas on the issue tracker, or in direct messages to the primary developers. You are much more likely to get a response and start a constructive discussion by posting your ideas in the public channels created for these purposes.
+
+## Reporting Issues
+
+If you have found a bug or issue in this project, please report it using the [issue tracker](https://github.com/markqvist/Reticulum/issues). If at all possible, be sure to include details on how to reproduce the bug.
+
+Anything submitted to the issue tracker that does not follow these guidelines will be closed and removed without comments or explanation.
+
+## Writing Code
+
+If you are interested in contributing code, fixing open issues or adding features, please coordinate the effort with the maintainer or one of the main developers **before** submitting a pull request. Before deciding to contribute, it is also a good idea to ensure your efforts are in alignment with the [Roadmap](./Roadmap.md) and current development focus.
+
+Pull requests have a high chance of being accepted if they are:
+
+- In alignment with the [Roadmap](./Roadmap.md) or solve an open issue or feature request
+- Sufficiently tested to work with all API functions, and pass the standard test suite
+- Functionally and conceptually complete and well-designed
+- Not simply formatting or code style changes
+- Well-documented
+
+Even new ideas and proposals that have not been approved by a maintainer, or fall outside the established roadmap, are *occasionally* accepted - if they possess the remainder of the above qualities. If not, they will be closed and removed without comments or explanation.
+
+By contributing code to this project, you agree that copyright for the code is transferred to the Reticulum maintainers and that the code is irrevocably placed under the [MIT license](./LICENSE).
```
|
@@ -32,7 +32,7 @@ def program_setup(configpath):
     # Destinations are endpoints in Reticulum, that can be addressed
     # and communicated with. Destinations can also announce their
     # existence, which will let the network know they are reachable
-    # and autoomatically create paths to them, from anywhere else
+    # and automatically create paths to them, from anywhere else
     # in the network.
     destination_1 = RNS.Destination(
         identity,
@@ -53,7 +53,7 @@ def program_setup(configpath):
     )

     # We configure the destinations to automatically prove all
-    # packets adressed to it. By doing this, RNS will automatically
+    # packets addressed to it. By doing this, RNS will automatically
     # generate a proof for each incoming packet and transmit it
     # back to the sender of that packet. This will let anyone that
     # tries to communicate with the destination know whether their
@@ -130,10 +130,11 @@ class ExampleAnnounceHandler:
             RNS.prettyhexrep(destination_hash)
         )

-        RNS.log(
-            "The announce contained the following app data: "+
-            app_data.decode("utf-8")
-        )
+        if app_data:
+            RNS.log(
+                "The announce contained the following app data: "+
+                app_data.decode("utf-8")
+            )

 ##########################################################
 #### Program Startup #####################################
@@ -0,0 +1,323 @@
##########################################################
# This RNS example demonstrates how to set up a link to  #
# a destination, and pass binary data over it using a    #
# channel buffer.                                        #
##########################################################

from __future__ import annotations
import os
import sys
import time
import argparse
from datetime import datetime

import RNS
from RNS.vendor import umsgpack

# Let's define an app name. We'll use this for all
# destinations we create. Since this example
# is part of a range of example utilities, we'll put
# them all within the app namespace "example_utilities"
APP_NAME = "example_utilities"


##########################################################
#### Server Part #########################################
##########################################################

# A reference to the latest client link that connected
latest_client_link = None

# A reference to the latest buffer object
latest_buffer = None

# This initialisation is executed when the user chooses
# to run as a server
def server(configpath):
    # We must first initialise Reticulum
    reticulum = RNS.Reticulum(configpath)

    # Randomly create a new identity for our example
    server_identity = RNS.Identity()

    # We create a destination that clients can connect to. We
    # want clients to create links to this destination, so we
    # need to create a "single" destination type.
    server_destination = RNS.Destination(
        server_identity,
        RNS.Destination.IN,
        RNS.Destination.SINGLE,
        APP_NAME,
        "bufferexample"
    )

    # We configure a function that will get called every time
    # a new client creates a link to this destination.
    server_destination.set_link_established_callback(client_connected)

    # Everything's ready!
    # Let's wait for client requests or user input
    server_loop(server_destination)

def server_loop(destination):
    # Let the user know that everything is ready
    RNS.log(
        "Link buffer example "+
        RNS.prettyhexrep(destination.hash)+
        " running, waiting for a connection."
    )

    RNS.log("Hit enter to manually send an announce (Ctrl-C to quit)")

    # We enter a loop that runs until the user exits.
    # If the user hits enter, we will announce our server
    # destination on the network, which will let clients
    # know how to create messages directed towards it.
    while True:
        entered = input()
        destination.announce()
        RNS.log("Sent announce from "+RNS.prettyhexrep(destination.hash))

# When a client establishes a link to our server
# destination, this function will be called with
# a reference to the link.
def client_connected(link):
    global latest_client_link, latest_buffer
    latest_client_link = link

    RNS.log("Client connected")
    link.set_link_closed_callback(client_disconnected)

    # If a new connection is received, the old reader
    # needs to be disconnected.
    if latest_buffer:
        latest_buffer.close()

    # Create buffer objects.
    # The stream_id parameter to these functions is
    # a bit like a file descriptor, except that it
    # is unique to the *receiver*.
    #
    # In this example, both the reader and the writer
    # use stream_id = 0, but there are actually two
    # separate unidirectional streams flowing in
    # opposite directions.
    channel = link.get_channel()
    latest_buffer = RNS.Buffer.create_bidirectional_buffer(0, 0, channel, server_buffer_ready)

def client_disconnected(link):
    RNS.log("Client disconnected")

def server_buffer_ready(ready_bytes: int):
    """
    Callback from buffer when the buffer has data available

    :param ready_bytes: The number of bytes ready to read
    """
    global latest_buffer

    data = latest_buffer.read(ready_bytes)
    data = data.decode("utf-8")

    RNS.log("Received data over the buffer: " + data)

    reply_message = "I received \""+data+"\" over the buffer"
    reply_message = reply_message.encode("utf-8")
    latest_buffer.write(reply_message)
    latest_buffer.flush()


##########################################################
#### Client Part #########################################
##########################################################

# A reference to the server link
server_link = None

# A reference to the buffer object, needed to share the
# object from the link connected callback to the client
# loop.
buffer = None

# This initialisation is executed when the user chooses
# to run as a client
def client(destination_hexhash, configpath):
    # We need a binary representation of the destination
    # hash that was entered on the command line
    try:
        dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
        if len(destination_hexhash) != dest_len:
            raise ValueError(
                "Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
            )

        destination_hash = bytes.fromhex(destination_hexhash)
    except:
        RNS.log("Invalid destination entered. Check your input!\n")
        exit()

    # We must first initialise Reticulum
    reticulum = RNS.Reticulum(configpath)

    # Check if we know a path to the destination
    if not RNS.Transport.has_path(destination_hash):
        RNS.log("Destination is not yet known. Requesting path and waiting for announce to arrive...")
        RNS.Transport.request_path(destination_hash)
        while not RNS.Transport.has_path(destination_hash):
            time.sleep(0.1)

    # Recall the server identity
    server_identity = RNS.Identity.recall(destination_hash)

    # Inform the user that we'll begin connecting
    RNS.log("Establishing link with server...")

    # When the server identity is known, we set
    # up a destination
    server_destination = RNS.Destination(
        server_identity,
        RNS.Destination.OUT,
        RNS.Destination.SINGLE,
        APP_NAME,
        "bufferexample"
    )

    # And create a link
    link = RNS.Link(server_destination)

    # We'll also set up functions to inform the
    # user when the link is established or closed
    link.set_link_established_callback(link_established)
    link.set_link_closed_callback(link_closed)

    # Everything is set up, so let's enter a loop
    # for the user to interact with the example
    client_loop()

def client_loop():
    global server_link

    # Wait for the link to become active
    while not server_link:
        time.sleep(0.1)

    should_quit = False
    while not should_quit:
        try:
            print("> ", end=" ")
            text = input()

            # Check if we should quit the example
            if text == "quit" or text == "q" or text == "exit":
                should_quit = True
                server_link.teardown()
            else:
                # Otherwise, encode the text and write it to the buffer.
                text = text.encode("utf-8")
                buffer.write(text)
                # Flush the buffer to force the data to be sent.
                buffer.flush()

        except Exception as e:
            RNS.log("Error while sending data over the link buffer: "+str(e))
            should_quit = True
            server_link.teardown()

# This function is called when a link
# has been established with the server
def link_established(link):
    # We store a reference to the link
    # instance for later use
    global server_link, buffer
    server_link = link

    # Create the buffer; see client_connected() for
    # more detail about setting up the buffer.
    channel = link.get_channel()
    buffer = RNS.Buffer.create_bidirectional_buffer(0, 0, channel, client_buffer_ready)

    # Inform the user that the server is
    # connected
    RNS.log("Link established with server, enter some text to send, or \"quit\" to quit")

# When a link is closed, we'll inform the
# user, and exit the program
def link_closed(link):
    if link.teardown_reason == RNS.Link.TIMEOUT:
        RNS.log("The link timed out, exiting now")
    elif link.teardown_reason == RNS.Link.DESTINATION_CLOSED:
        RNS.log("The link was closed by the server, exiting now")
    else:
        RNS.log("Link closed, exiting now")

    RNS.Reticulum.exit_handler()
    time.sleep(1.5)
    os._exit(0)

# When the buffer has new data, read it and write it to the terminal.
def client_buffer_ready(ready_bytes: int):
    global buffer
    data = buffer.read(ready_bytes)
    RNS.log("Received data over the link buffer: " + data.decode("utf-8"))
    print("> ", end=" ")
    sys.stdout.flush()


##########################################################
#### Program Startup #####################################
##########################################################

# This part of the program runs at startup,
# parses input from the user, and then
# starts up the desired program mode.
if __name__ == "__main__":
    try:
        parser = argparse.ArgumentParser(description="Simple buffer example")

        parser.add_argument(
            "-s",
            "--server",
            action="store_true",
            help="wait for incoming link requests from clients"
        )

        parser.add_argument(
            "--config",
            action="store",
            default=None,
            help="path to alternative Reticulum config directory",
            type=str
        )

        parser.add_argument(
            "destination",
            nargs="?",
            default=None,
            help="hexadecimal hash of the server destination",
            type=str
        )

        args = parser.parse_args()

        if args.config:
            configarg = args.config
        else:
            configarg = None

        if args.server:
            server(configarg)
        else:
            if args.destination == None:
                print("")
                parser.print_help()
                print("")
            else:
                client(args.destination, configarg)

    except KeyboardInterrupt:
        print("")
        exit()
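The buffer mechanics above can be sketched without Reticulum at all. The following minimal in-memory stand-in (the `CallbackBuffer` class is hypothetical, not part of the RNS API) illustrates the write/flush/ready-callback cycle both ends of the example rely on:

```python
class CallbackBuffer:
    # Hypothetical stand-in for RNS.Buffer: writers stage bytes with
    # write(), and flush() notifies the reader how many bytes are
    # ready, just like the server_buffer_ready() callback above.
    def __init__(self, ready_callback):
        self._data = bytearray()
        self._ready_callback = ready_callback

    def write(self, data: bytes):
        self._data += data

    def flush(self):
        if self._data:
            self._ready_callback(len(self._data))

    def read(self, n: int) -> bytes:
        out = bytes(self._data[:n])
        del self._data[:n]
        return out

received = []
buf = CallbackBuffer(lambda ready: received.append(buf.read(ready)))
buf.write(b"hello ")
buf.write(b"buffer")
buf.flush()
```

Note that nothing is delivered until `flush()` is called, which mirrors why the client loop above must call `buffer.flush()` to force its input over the link.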
@@ -0,0 +1,390 @@
##########################################################
# This RNS example demonstrates how to set up a link to  #
# a destination, and pass structured messages over it    #
# using a channel.                                       #
##########################################################

import os
import sys
import time
import argparse
from datetime import datetime

import RNS
from RNS.vendor import umsgpack

# Let's define an app name. We'll use this for all
# destinations we create. Since this example
# is part of a range of example utilities, we'll put
# them all within the app namespace "example_utilities"
APP_NAME = "example_utilities"

##########################################################
#### Shared Objects ######################################
##########################################################

# Channel data must be structured in a subclass of
# MessageBase. This ensures that the channel will be able
# to serialize and deserialize the object and multiplex it
# with other objects. Both ends of a link will need the
# same object definitions to be able to communicate over
# a channel.
#
# Note: The objects we wish to use over the channel must
# be registered with the channel, and each link has a
# different channel instance. See the client_connected
# and link_established functions in this example to see
# how message types are registered.

# Let's make a simple message class called StringMessage
# that will convey a string with a timestamp.

class StringMessage(RNS.MessageBase):
    # The MSGTYPE class variable needs to be assigned a
    # 2 byte integer value. This identifier allows the
    # channel to look up your message's constructor when a
    # message arrives over the channel.
    #
    # MSGTYPE must be unique across all message types we
    # register with the channel. MSGTYPEs >= 0xf000 are
    # reserved for the system.
    MSGTYPE = 0x0101

    # The constructor of our object must be callable with
    # no arguments. We can have parameters, but they must
    # have a default assignment.
    #
    # This is needed so the channel can create an empty
    # version of our message, into which the incoming
    # message can be unpacked.
    def __init__(self, data=None):
        self.data = data
        self.timestamp = datetime.now()

    # Finally, our message needs to implement functions
    # the channel can call to pack and unpack our message
    # to/from the raw packet payload. We'll use the
    # umsgpack package bundled with RNS. We could also use
    # the struct package bundled with Python if we wanted
    # more control over the structure of the packed bytes.
    #
    # Also note that packed message objects must fit
    # entirely in one packet. The number of bytes
    # available for message payloads can be queried from
    # the channel using the Channel.MDU property. The
    # channel MDU is slightly less than the link MDU due
    # to encoding the message header.

    # The pack function encodes the message contents into
    # a byte stream.
    def pack(self) -> bytes:
        return umsgpack.packb((self.data, self.timestamp))

    # And the unpack function decodes a byte stream into
    # the message contents.
    def unpack(self, raw):
        self.data, self.timestamp = umsgpack.unpackb(raw)


##########################################################
#### Server Part #########################################
##########################################################

# A reference to the latest client link that connected
latest_client_link = None

# This initialisation is executed when the user chooses
# to run as a server
def server(configpath):
    # We must first initialise Reticulum
    reticulum = RNS.Reticulum(configpath)

    # Randomly create a new identity for our link example
    server_identity = RNS.Identity()

    # We create a destination that clients can connect to. We
    # want clients to create links to this destination, so we
    # need to create a "single" destination type.
    server_destination = RNS.Destination(
        server_identity,
        RNS.Destination.IN,
        RNS.Destination.SINGLE,
        APP_NAME,
        "channelexample"
    )

    # We configure a function that will get called every time
    # a new client creates a link to this destination.
    server_destination.set_link_established_callback(client_connected)

    # Everything's ready!
    # Let's wait for client requests or user input
    server_loop(server_destination)

def server_loop(destination):
    # Let the user know that everything is ready
    RNS.log(
        "Link example "+
        RNS.prettyhexrep(destination.hash)+
        " running, waiting for a connection."
    )

    RNS.log("Hit enter to manually send an announce (Ctrl-C to quit)")

    # We enter a loop that runs until the user exits.
    # If the user hits enter, we will announce our server
    # destination on the network, which will let clients
    # know how to create messages directed towards it.
    while True:
        entered = input()
        destination.announce()
        RNS.log("Sent announce from "+RNS.prettyhexrep(destination.hash))

# When a client establishes a link to our server
# destination, this function will be called with
# a reference to the link.
def client_connected(link):
    global latest_client_link
    latest_client_link = link

    RNS.log("Client connected")
    link.set_link_closed_callback(client_disconnected)

    # Register message types and add callback to channel
    channel = link.get_channel()
    channel.register_message_type(StringMessage)
    channel.add_message_handler(server_message_received)

def client_disconnected(link):
    RNS.log("Client disconnected")

def server_message_received(message):
    """
    A message handler

    @param message: An instance of a subclass of MessageBase
    @return: True if message was handled
    """
    global latest_client_link
    # When a message is received over any active link,
    # the replies will all be directed to the last client
    # that connected.

    # Any deserializable message that arrives over the
    # link's channel will be passed to all registered
    # message handlers, unless a preceding handler
    # indicates it has handled the message.
    if isinstance(message, StringMessage):
        RNS.log("Received data on the link: " + message.data + " (message created at " + str(message.timestamp) + ")")

        reply_message = StringMessage("I received \""+message.data+"\" over the link")
        latest_client_link.get_channel().send(reply_message)

        # Incoming messages are sent to each message
        # handler added to the channel, in the order they
        # were added.
        # If any message handler returns True, the message
        # is considered handled and any subsequent
        # handlers are skipped.
        return True


##########################################################
#### Client Part #########################################
##########################################################

# A reference to the server link
server_link = None

# This initialisation is executed when the user chooses
# to run as a client
def client(destination_hexhash, configpath):
    # We need a binary representation of the destination
    # hash that was entered on the command line
    try:
        dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
        if len(destination_hexhash) != dest_len:
            raise ValueError(
                "Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
            )

        destination_hash = bytes.fromhex(destination_hexhash)
    except:
        RNS.log("Invalid destination entered. Check your input!\n")
        exit()

    # We must first initialise Reticulum
    reticulum = RNS.Reticulum(configpath)

    # Check if we know a path to the destination
    if not RNS.Transport.has_path(destination_hash):
        RNS.log("Destination is not yet known. Requesting path and waiting for announce to arrive...")
        RNS.Transport.request_path(destination_hash)
        while not RNS.Transport.has_path(destination_hash):
            time.sleep(0.1)

    # Recall the server identity
    server_identity = RNS.Identity.recall(destination_hash)

    # Inform the user that we'll begin connecting
    RNS.log("Establishing link with server...")

    # When the server identity is known, we set
    # up a destination
    server_destination = RNS.Destination(
        server_identity,
        RNS.Destination.OUT,
        RNS.Destination.SINGLE,
        APP_NAME,
        "channelexample"
    )

    # And create a link
    link = RNS.Link(server_destination)

    # We'll also set up functions to inform the
    # user when the link is established or closed
    link.set_link_established_callback(link_established)
    link.set_link_closed_callback(link_closed)

    # Everything is set up, so let's enter a loop
    # for the user to interact with the example
    client_loop()

def client_loop():
    global server_link

    # Wait for the link to become active
    while not server_link:
        time.sleep(0.1)

    should_quit = False
    while not should_quit:
        try:
            print("> ", end=" ")
            text = input()

            # Check if we should quit the example
            if text == "quit" or text == "q" or text == "exit":
                should_quit = True
                server_link.teardown()

            # If not, send the entered text over the link
            if text != "":
                message = StringMessage(text)
                packed_size = len(message.pack())
                channel = server_link.get_channel()
                if channel.is_ready_to_send():
                    if packed_size <= channel.MDU:
                        channel.send(message)
                    else:
                        RNS.log(
                            "Cannot send this packet, the data size of "+
                            str(packed_size)+" bytes exceeds the link packet MDU of "+
                            str(channel.MDU)+" bytes",
                            RNS.LOG_ERROR
                        )
                else:
                    RNS.log("Channel is not ready to send, please wait for " +
                            "pending messages to complete.", RNS.LOG_ERROR)

        except Exception as e:
            RNS.log("Error while sending data over the link: "+str(e))
            should_quit = True
            server_link.teardown()

# This function is called when a link
# has been established with the server
def link_established(link):
    # We store a reference to the link
    # instance for later use
    global server_link
    server_link = link

    # Register messages and add handler to channel
    channel = link.get_channel()
    channel.register_message_type(StringMessage)
    channel.add_message_handler(client_message_received)

    # Inform the user that the server is
    # connected
    RNS.log("Link established with server, enter some text to send, or \"quit\" to quit")

# When a link is closed, we'll inform the
# user, and exit the program
def link_closed(link):
    if link.teardown_reason == RNS.Link.TIMEOUT:
        RNS.log("The link timed out, exiting now")
    elif link.teardown_reason == RNS.Link.DESTINATION_CLOSED:
        RNS.log("The link was closed by the server, exiting now")
    else:
        RNS.log("Link closed, exiting now")

    RNS.Reticulum.exit_handler()
    time.sleep(1.5)
    os._exit(0)

# When a message is received over the channel, we
# simply print out the data.
def client_message_received(message):
    if isinstance(message, StringMessage):
        RNS.log("Received data on the link: " + message.data + " (message created at " + str(message.timestamp) + ")")
        print("> ", end=" ")
        sys.stdout.flush()


##########################################################
#### Program Startup #####################################
##########################################################

# This part of the program runs at startup,
# parses input from the user, and then
# starts up the desired program mode.
if __name__ == "__main__":
    try:
        parser = argparse.ArgumentParser(description="Simple channel example")

        parser.add_argument(
            "-s",
            "--server",
            action="store_true",
            help="wait for incoming link requests from clients"
        )

        parser.add_argument(
            "--config",
            action="store",
            default=None,
            help="path to alternative Reticulum config directory",
            type=str
        )

        parser.add_argument(
            "destination",
            nargs="?",
            default=None,
            help="hexadecimal hash of the server destination",
            type=str
        )

        args = parser.parse_args()

        if args.config:
            configarg = args.config
        else:
            configarg = None

        if args.server:
            server(configarg)
        else:
            if args.destination == None:
                print("")
                parser.print_help()
                print("")
            else:
                client(args.destination, configarg)

    except KeyboardInterrupt:
        print("")
        exit()
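The pack/unpack contract the channel depends on can be illustrated with a self-contained sketch. This uses the stdlib `struct` package instead of the bundled umsgpack, as the comments in the example suggest is possible; the `DemoMessage` class and its byte layout are hypothetical, chosen only to show the symmetry the channel requires:

```python
import struct

class DemoMessage:
    # Hypothetical message following the same pack/unpack contract as
    # StringMessage above, but with an explicit byte layout: a 2-byte
    # big-endian length prefix followed by the UTF-8 encoded string.
    MSGTYPE = 0x0102  # must be unique per channel, < 0xf000

    def __init__(self, data=None):
        self.data = data

    def pack(self) -> bytes:
        encoded = self.data.encode("utf-8")
        return struct.pack("!H", len(encoded)) + encoded

    def unpack(self, raw: bytes):
        (length,) = struct.unpack("!H", raw[:2])
        self.data = raw[2:2+length].decode("utf-8")

# The channel relies on this symmetry: a default-constructed message
# unpacks the bytes produced by pack() on the other end of the link.
original = DemoMessage("hello channel")
restored = DemoMessage()
restored.unpack(original.pack())
```

The same roundtrip property is what lets the receiving channel construct an empty message from its registered `MSGTYPE` and fill it from the raw payload.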
@@ -5,6 +5,7 @@
 # of the packet. #
 ##########################################################

+import os
 import argparse
 import RNS
@@ -27,8 +28,19 @@ def server(configpath):
     # We must first initialise Reticulum
     reticulum = RNS.Reticulum(configpath)

-    # Randomly create a new identity for our echo server
-    server_identity = RNS.Identity()
+    # Load the identity from file if it exists, otherwise randomly create one
+    if configpath:
+        ifilepath = "%s/storage/identities/%s" % (configpath, APP_NAME)
+    else:
+        ifilepath = "%s/storage/identities/%s" % (RNS.Reticulum.configdir, APP_NAME)
+    if os.path.exists(ifilepath):
+        # Load identity from file
+        server_identity = RNS.Identity.from_file(ifilepath)
+        RNS.log("loaded identity from file: "+ifilepath, RNS.LOG_VERBOSE)
+    else:
+        # Randomly create a new identity for our echo example
+        server_identity = RNS.Identity()
+        RNS.log("created new identity", RNS.LOG_VERBOSE)

     # We create a destination that clients can query. We want
     # to be able to verify echo replies to our clients, so we
@@ -46,7 +58,7 @@ def server(configpath):
     )

     # We configure the destination to automatically prove all
-    # packets adressed to it. By doing this, RNS will automatically
+    # packets addressed to it. By doing this, RNS will automatically
     # generate a proof for each incoming packet and transmit it
     # back to the sender of that packet.
     echo_destination.set_proof_strategy(RNS.Destination.PROVE_ALL)
@@ -120,14 +132,16 @@ def client(destination_hexhash, configpath, timeout=None):
     # We need a binary representation of the destination
     # hash that was entered on the command line
     try:
-        if len(destination_hexhash) != 20:
+        dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
+        if len(destination_hexhash) != dest_len:
             raise ValueError(
-                "Destination length is invalid, must be 20 hexadecimal characters (10 bytes)"
+                "Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
             )

         destination_hash = bytes.fromhex(destination_hexhash)
-    except:
-        RNS.log("Invalid destination entered. Check your input!\n")
+    except Exception as e:
+        RNS.log("Invalid destination entered. Check your input!")
+        RNS.log(str(e)+"\n")
         exit()

     # We must first initialise Reticulum
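The change above replaces a hard-coded 20-character check with a length derived from the configured truncated hash length, and the same pattern recurs in several files below. Factored into one helper, it looks like this (the function name is hypothetical; the 128-bit default matches the value the examples read from `RNS.Reticulum.TRUNCATED_HASHLENGTH`, stated here as an assumption):

```python
def parse_destination_hexhash(hexhash: str, truncated_hashlength: int = 128) -> bytes:
    # Derive the expected hex-string length from the truncated hash
    # length in bits: bits // 8 bytes, two hex characters per byte.
    dest_len = (truncated_hashlength // 8) * 2
    if len(hexhash) != dest_len:
        raise ValueError(
            "Destination length is invalid, must be {hex} hexadecimal "
            "characters ({byte} bytes).".format(hex=dest_len, byte=dest_len // 2)
        )
    return bytes.fromhex(hexhash)
```

Deriving the length from one constant means a future change to the truncated hash length will not silently break every example's input validation.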
@@ -208,6 +222,7 @@ def client(destination_hexhash, configpath, timeout=None):
         # If we do not know this destination, tell the
         # user to wait for an announce to arrive.
         RNS.log("Destination is not yet known. Requesting path...")
+        RNS.log("Hit enter to manually retry once an announce is received.")
         RNS.Transport.request_path(destination_hash)

 # This function is called when our reply destination
@@ -325,4 +340,4 @@ if __name__ == "__main__":
         client(args.destination, configarg, timeout=timeoutarg)
     except KeyboardInterrupt:
         print("")
-        exit()
+        exit()
@@ -215,8 +215,12 @@ def client(destination_hexhash, configpath):
     # We need a binary representation of the destination
     # hash that was entered on the command line
     try:
-        if len(destination_hexhash) != 20:
-            raise ValueError("Destination length is invalid, must be 20 hexadecimal characters (10 bytes)")
+        dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
+        if len(destination_hexhash) != dest_len:
+            raise ValueError(
+                "Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
+            )

         destination_hash = bytes.fromhex(destination_hexhash)
     except:
         RNS.log("Invalid destination entered. Check your input!\n")
@@ -445,8 +449,7 @@ def link_established(link):
 # And set up a small job to check for
 # a potential timeout in receiving the
 # file list
-thread = threading.Thread(target=filelist_timeout_job)
-thread.setDaemon(True)
+thread = threading.Thread(target=filelist_timeout_job, daemon=True)
 thread.start()
 
 # This job just sleeps for the specified
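This hunk swaps the deprecated `Thread.setDaemon(True)` call for the `daemon=True` constructor argument, which does the same thing in one step. A runnable sketch of the pattern (the job body here is a placeholder, not the example's actual timeout job):

```python
import threading
import time

def filelist_timeout_job():
    # Placeholder: the real job sleeps for a timeout and then checks
    # whether the file list has arrived
    time.sleep(0.01)

# daemon=True at construction replaces the deprecated setDaemon(True),
# so the watchdog thread never blocks interpreter shutdown
thread = threading.Thread(target=filelist_timeout_job, daemon=True)
thread.start()
thread.join()
```

`setDaemon()` has been deprecated since Python 3.10 in favour of the `daemon` attribute/argument, so this change also silences a `DeprecationWarning` on current interpreters.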
@@ -84,7 +84,7 @@ def client_connected(link):
 def client_disconnected(link):
 RNS.log("Client disconnected")
 
-def remote_identified(identity):
+def remote_identified(link, identity):
 RNS.log("Remote identified as: "+str(identity))
 
 def server_packet_received(message, packet):
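The updated callback now receives the link as its first argument, so a server handling several links at once can tell which one the identifying peer belongs to. A stand-alone sketch of the new signature, using dummy stand-ins rather than real RNS objects (the class names here are illustrative, not RNS API):

```python
# Illustrative stand-ins; in the example these would be RNS.Link / RNS.Identity
class DummyLink:
    def __init__(self, link_id):
        self.link_id = link_id

class DummyIdentity:
    def __init__(self, hexhash):
        self.hexhash = hexhash

log_lines = []

def remote_identified(link, identity):
    # With the link passed in, the handler can act per-link
    log_lines.append("Remote identified as: %s on link %s" % (identity.hexhash, link.link_id))

remote_identified(DummyLink("l1"), DummyIdentity("aabb"))
```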
@@ -124,8 +124,12 @@ def client(destination_hexhash, configpath):
 # We need a binary representation of the destination
 # hash that was entered on the command line
 try:
-if len(destination_hexhash) != 20:
-raise ValueError("Destination length is invalid, must be 20 hexadecimal characters (10 bytes)")
+dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
+if len(destination_hexhash) != dest_len:
+raise ValueError(
+"Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
+)
+
 destination_hash = bytes.fromhex(destination_hexhash)
 except:
 RNS.log("Invalid destination entered. Check your input!\n")
@@ -28,8 +28,20 @@ def server(configpath):
 # We must first initialise Reticulum
 reticulum = RNS.Reticulum(configpath)
 
-# Randomly create a new identity for our link example
-server_identity = RNS.Identity()
+# Load identity from file if it exist or randomley create
+if configpath:
+ifilepath = "%s/storage/identitiesy/%s" % (configpath,APP_NAME)
+else:
+ifilepath = "%s/storage/identities/%s" % (RNS.Reticulum.configdir,APP_NAME)
+RNS.log("ifilepath: %s" % ifilepath)
+if os.path.exists(ifilepath):
+# Load identity from file
+server_identity = RNS.Identity.from_file(ifilepath)
+RNS.log("loaded identity from file: "+ifilepath, RNS.LOG_VERBOSE)
+else:
+# Randomly create a new identity for our link example
+server_identity = RNS.Identity()
+RNS.log("created new identity", RNS.LOG_VERBOSE)
 
 # We create a destination that clients can connect to. We
 # want clients to create links to this destination, so we
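The hunk above changes the server from generating a fresh identity on every start to persisting one on disk and reusing it, so clients see a stable destination across restarts. A self-contained sketch of the same load-or-create flow, using plain files and random bytes as a stand-in for `RNS.Identity`'s `from_file`/`to_file` persistence (path layout and the 32-byte key are assumptions for illustration):

```python
import os
import tempfile

APP_NAME = "example_utilities"

def load_or_create_identity(storage_dir, create=os.urandom):
    # Mirrors the load-or-create flow above: reuse a stored identity if one
    # exists, otherwise create one and persist it for the next run
    ifilepath = os.path.join(storage_dir, "identities", APP_NAME)
    os.makedirs(os.path.dirname(ifilepath), exist_ok=True)
    if os.path.exists(ifilepath):
        with open(ifilepath, "rb") as f:
            return f.read(), "loaded"
    key = create(32)  # stand-in for RNS.Identity() key material
    with open(ifilepath, "wb") as f:
        f.write(key)
    return key, "created"

with tempfile.TemporaryDirectory() as d:
    first, how1 = load_or_create_identity(d)   # creates and stores
    second, how2 = load_or_create_identity(d)  # loads the stored identity
```

The second call returns the identical key material, which is exactly the property the commit is after: the server's address no longer changes between runs.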
@@ -110,8 +122,12 @@ def client(destination_hexhash, configpath):
 # We need a binary representation of the destination
 # hash that was entered on the command line
 try:
-if len(destination_hexhash) != 20:
-raise ValueError("Destination length is invalid, must be 20 hexadecimal characters (10 bytes)")
+dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
+if len(destination_hexhash) != dest_len:
+raise ValueError(
+"Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
+)
+
 destination_hash = bytes.fromhex(destination_hexhash)
 except:
 RNS.log("Invalid destination entered. Check your input!\n")
@@ -284,4 +300,4 @@ if __name__ == "__main__":
 
 except KeyboardInterrupt:
 print("")
-exit()
+exit()
@@ -25,7 +25,7 @@ def program_setup(configpath):
 # Destinations are endpoints in Reticulum, that can be addressed
 # and communicated with. Destinations can also announce their
 # existence, which will let the network know they are reachable
-# and autoomatically create paths to them, from anywhere else
+# and automatically create paths to them, from anywhere else
 # in the network.
 destination = RNS.Destination(
 identity,
@@ -36,7 +36,7 @@ def program_setup(configpath):
 )
 
 # We configure the destination to automatically prove all
-# packets adressed to it. By doing this, RNS will automatically
+# packets addressed to it. By doing this, RNS will automatically
 # generate a proof for each incoming packet and transmit it
 # back to the sender of that packet. This will let anyone that
 # tries to communicate with the destination know whether their
@@ -23,8 +23,8 @@ APP_NAME = "example_utilities"
 # A reference to the latest client link that connected
 latest_client_link = None
 
-def random_text_generator(path, data, request_id, remote_identity, requested_at):
-RNS.log("Generating response to request "+RNS.prettyhexrep(request_id))
+def random_text_generator(path, data, request_id, link_id, remote_identity, requested_at):
+RNS.log("Generating response to request "+RNS.prettyhexrep(request_id)+" on link "+RNS.prettyhexrep(link_id))
 texts = ["They looked up", "On each full moon", "Becky was upset", "I’ll stay away from it", "The pet shop stocks everything"]
 return texts[random.randint(0, len(texts)-1)]
 
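This hunk extends the request handler's signature with a `link_id` argument between `request_id` and `remote_identity`, so responders can log or branch on the specific link a request arrived over. A sketch of a handler with the new signature, invoked directly with placeholder arguments (the dispatch call here stands in for what RNS performs internally when a request arrives; the argument values are illustrative):

```python
import random

# Updated signature: link_id now sits between request_id and remote_identity,
# letting the handler report which link the request came in on
def random_text_generator(path, data, request_id, link_id, remote_identity, requested_at):
    texts = ["They looked up", "On each full moon", "Becky was upset"]
    return texts[random.randint(0, len(texts)-1)]

# Hypothetical stand-in for the dispatch RNS performs on an incoming request
response = random_text_generator("/random/text", None, b"\x01", b"\x02", None, 0.0)
```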
@@ -110,8 +110,12 @@ def client(destination_hexhash, configpath):
 # We need a binary representation of the destination
 # hash that was entered on the command line
 try:
-if len(destination_hexhash) != 20:
-raise ValueError("Destination length is invalid, must be 20 hexadecimal characters (10 bytes)")
+dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
+if len(destination_hexhash) != dest_len:
+raise ValueError(
+"Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
+)
+
 destination_hash = bytes.fromhex(destination_hexhash)
 except:
 RNS.log("Invalid destination entered. Check your input!\n")
@@ -166,8 +166,12 @@ def client(destination_hexhash, configpath):
 # We need a binary representation of the destination
 # hash that was entered on the command line
 try:
-if len(destination_hexhash) != 20:
-raise ValueError("Destination length is invalid, must be 20 hexadecimal characters (10 bytes)")
+dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
+if len(destination_hexhash) != dest_len:
+raise ValueError(
+"Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
+)
+
 destination_hash = bytes.fromhex(destination_hexhash)
 except:
 RNS.log("Invalid destination entered. Check your input!\n")
@@ -0,0 +1,2 @@
+ko_fi: markqvist
+custom: "https://unsigned.io/donate"
LICENSE (2 changes)
@@ -1,6 +1,6 @@
 MIT License, unless otherwise noted
 
-Copyright (c) 2016-2022 Mark Qvist / unsigned.io
+Copyright (c) 2016-2024 Mark Qvist / unsigned.io
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
Makefile (42 changes)
@@ -1,9 +1,28 @@
 all: release
 
+test:
+	@echo Running tests...
+	python -m tests.all
+
 clean:
 	@echo Cleaning...
-	-rm -r ./build
-	-rm -r ./dist
+	@-rm -rf ./build
+	@-rm -rf ./dist
+	@-rm -rf ./*.data
+	@-rm -rf ./__pycache__
+	@-rm -rf ./RNS/__pycache__
+	@-rm -rf ./RNS/Cryptography/__pycache__
+	@-rm -rf ./RNS/Cryptography/aes/__pycache__
+	@-rm -rf ./RNS/Cryptography/pure25519/__pycache__
+	@-rm -rf ./RNS/Interfaces/__pycache__
+	@-rm -rf ./RNS/Utilities/__pycache__
+	@-rm -rf ./RNS/vendor/__pycache__
+	@-rm -rf ./RNS/vendor/i2plib/__pycache__
+	@-rm -rf ./tests/__pycache__
+	@-rm -rf ./tests/rnsconfig/storage
+	@-rm -rf ./*.egg-info
+	@make -C docs clean
+	@echo Done
 
 remove_symlinks:
 	@echo Removing symlinks for build...
@@ -15,11 +34,28 @@ create_symlinks:
 	-ln -s ../RNS ./Examples/
 	-ln -s ../../RNS ./RNS/Utilities/
 
+build_sdist_only:
+	python3 setup.py sdist
+
 build_wheel:
 	python3 setup.py sdist bdist_wheel
 
-release: remove_symlinks build_wheel create_symlinks
+build_pure_wheel:
+	python3 setup.py sdist bdist_wheel --pure
+
+documentation:
+	make -C docs html
+
+manual:
+	make -C docs latexpdf epub
+
+release: test remove_symlinks build_wheel build_pure_wheel documentation manual create_symlinks
+
+debug: remove_symlinks build_wheel build_pure_wheel create_symlinks
 
 upload:
 	@echo Ready to publish release, hit enter to continue
 	@read VOID
+	@echo Uploading to PyPi...
+	twine upload dist/*
+	@echo Release published
README.md (363 changes)
@@ -1,32 +1,51 @@
-Reticulum Network Stack β
+Reticulum Network Stack β <img align="right" src="https://static.pepy.tech/personalized-badge/rns?period=total&units=international_system&left_color=grey&right_color=blue&left_text=Installs"/>
 ==========
 
-<p align="center"><img width="200" src="https://unsigned.io/wp-content/uploads/2022/03/reticulum_logo_512.png"></p>
+<p align="center"><img width="200" src="https://raw.githubusercontent.com/markqvist/Reticulum/master/docs/source/graphics/rns_logo_512.png"></p>
 
-Reticulum is the cryptography-based networking stack for wide-area networks built on readily available hardware. It can operate even with very high latency and extremely low bandwidth. Reticulum allows you to build wide-area networks with off-the-shelf tools, and offers end-to-end encryption and connectivity, initiator anonymity, autoconfiguring cryptographically backed multi-hop transport, efficient addressing, unforgeable delivery acknowledgements and more.
+Reticulum is the cryptography-based networking stack for building local and wide-area
+networks with readily available hardware. It can operate even with very high latency
+and extremely low bandwidth. Reticulum allows you to build wide-area networks
+with off-the-shelf tools, and offers end-to-end encryption and connectivity,
+initiator anonymity, autoconfiguring cryptographically backed multi-hop
+transport, efficient addressing, unforgeable delivery acknowledgements and
+more.
 
-The vision of Reticulum is to allow anyone to be their own network operator, and to make it cheap and easy to cover vast areas with a myriad of independent, interconnectable and autonomous networks. Reticulum **is not** *one* network. It is **a tool** for building *thousands of networks*. Networks without kill-switches, surveillance, censorship and control. Networks that can freely interoperate, associate and disassociate with each other, and require no central oversight. Networks for human beings. *Networks for the people*.
+The vision of Reticulum is to allow anyone to be their own network operator,
+and to make it cheap and easy to cover vast areas with a myriad of independent,
+inter-connectable and autonomous networks. Reticulum **is not** *one* network.
+It is **a tool** for building *thousands of networks*. Networks without
+kill-switches, surveillance, censorship and control. Networks that can freely
+interoperate, associate and disassociate with each other, and require no
+central oversight. Networks for human beings. *Networks for the people*.
 
-Reticulum is a complete networking stack, and does not rely on IP or higher layers, but it is possible to use IP as the underlying carrier for Reticulum. It is therefore trivial to tunnel Reticulum over the Internet or private IP networks.
+Reticulum is a complete networking stack, and does not rely on IP or higher
+layers, but it is possible to use IP as the underlying carrier for Reticulum.
+It is therefore trivial to tunnel Reticulum over the Internet or private IP
+networks.
 
-Having no dependencies on traditional networking stacks free up overhead that has been utilised to implement a networking stack built directly on cryptographic principles, allowing resilience and stable functionality in open and trustless networks.
+Having no dependencies on traditional networking stacks frees up overhead that
+has been used to implement a networking stack built directly on cryptographic
+principles, allowing resilience and stable functionality, even in open and
+trustless networks.
 
-No kernel modules or drivers are required. Reticulum runs completely in userland, and can run on practically any system that runs Python 3.
+No kernel modules or drivers are required. Reticulum runs completely in
+userland, and can run on practically any system that runs Python 3.
 
 ## Read The Manual
 The full documentation for Reticulum is available at [markqvist.github.io/Reticulum/manual/](https://markqvist.github.io/Reticulum/manual/).
 
-You can also [download the Reticulum manual as a PDF](https://github.com/markqvist/Reticulum/raw/master/docs/Reticulum%20Manual.pdf)
+You can also download the [Reticulum manual as a PDF](https://github.com/markqvist/Reticulum/raw/master/docs/Reticulum%20Manual.pdf) or [as an e-book in EPUB format](https://github.com/markqvist/Reticulum/raw/master/docs/Reticulum%20Manual.epub).
 
-For more info, see [unsigned.io/projects/reticulum](https://unsigned.io/projects/reticulum/)
+For more info, see [reticulum.network](https://reticulum.network/)
 
 ## Notable Features
-- Coordination-less globally unique adressing and identification
+- Coordination-less globally unique addressing and identification
 - Fully self-configuring multi-hop routing
-- Complete initiator anonymity, communicate without revealing your identity
+- Initiator anonymity, communicate without revealing your identity
 - Asymmetric X25519 encryption and Ed25519 signatures as a basis for all communication
-- Forward Secrecy with ephemereal Elliptic Curve Diffie-Hellman keys on Curve25519
-- Reticulum uses the [Fernet](https://github.com/fernet/spec/blob/master/Spec.md) specification for on-the-wire / over-the-air encryption
+- Forward Secrecy with ephemeral Elliptic Curve Diffie-Hellman keys on Curve25519
+- Reticulum uses the following format for encrypted tokens:
+  - Keys are ephemeral and derived from an ECDH key exchange on Curve25519
   - AES-128 in CBC mode with PKCS7 padding
   - HMAC using SHA256 for authentication
@@ -34,70 +53,130 @@ For more info, see [unsigned.io/projects/reticulum](https://unsigned.io/projects
 - Unforgeable packet delivery confirmations
 - A variety of supported interface types
 - An intuitive and easy-to-use API
-- Reliable and efficient transfer of arbritrary amounts of data
+- Reliable and efficient transfer of arbitrary amounts of data
 - Reticulum can handle a few bytes of data or files of many gigabytes
-- Sequencing, transfer coordination and checksumming is automatic
+- Sequencing, transfer coordination and checksumming are automatic
 - The API is very easy to use, and provides transfer progress
+- Lightweight, flexible and expandable Request/Response mechanism
 - Efficient link establishment
-- Total bandwidth cost of setting up an encrypted link is 3 packets totalling 237 bytes
-- Low cost of keeping links open at only 0.62 bits per second
+- Total cost of setting up an encrypted and verified link is only 3 packets, totalling 297 bytes
+- Low cost of keeping links open at only 0.44 bits per second
+- Reliable sequential delivery with Channel and Buffer mechanisms
 
+## Roadmap
+While Reticulum is already a fully featured and functional networking stack,
+many improvements and additions are actively being worked on, and planned for the future.
+
+To learn more about the direction and future of Reticulum, please see the [Development Roadmap](./Roadmap.md).
+
 ## Examples of Reticulum Applications
-If you want to quickly get an idea of what Reticulum can do, take a look at the following resources.
+If you want to quickly get an idea of what Reticulum can do, take a look at the
+following resources.
 
+- [LXMF](https://github.com/markqvist/lxmf) is a distributed, delay and disruption tolerant message transfer protocol built on Reticulum
+- You can use the [rnsh](https://github.com/acehoss/rnsh) program to establish remote shell sessions over Reticulum.
 - For an off-grid, encrypted and resilient mesh communications platform, see [Nomad Network](https://github.com/markqvist/NomadNet)
-- The Android, Linux and macOS app [Sideband](https://unsigned.io/sideband) has a graphical interface and focuses on ease of use.
+- The Android, Linux and macOS app [Sideband](https://github.com/markqvist/Sideband) has a graphical interface and focuses on ease of use.
-- [LXMF](https://github.com/markqvist/lxmf) is a distributed, delay and disruption tolerant message transfer protocol built on Reticulum
 
 ## Where can Reticulum be used?
-Over practically any medium that can support at least a half-duplex channel with 500 bits per second throughput, and an MTU of 500 bytes. Data radios, modems, LoRa radios, serial lines, AX.25 TNCs, amateur radio digital modes, ad-hoc WiFi, free-space optical links and similar systems are all examples of the types of interfaces Reticulum was designed for.
+Over practically any medium that can support at least a half-duplex channel
+with greater throughput than 5 bits per second, and an MTU of 500 bytes. Data radios,
+modems, LoRa radios, serial lines, AX.25 TNCs, amateur radio digital modes,
+WiFi and Ethernet devices, free-space optical links, and similar systems are
+all examples of the types of physical devices Reticulum can use.
 
-An open-source LoRa-based interface called [RNode](https://unsigned.io/projects/rnode/) has been designed specifically for use with Reticulum. It is possible to build yourself, or it can be purchased as a complete transceiver that just needs a USB connection to the host.
+An open-source LoRa-based interface called
+[RNode](https://markqvist.github.io/Reticulum/manual/hardware.html#rnode) has
+been designed specifically for use with Reticulum. It is possible to build
+yourself, or it can be purchased as a complete transceiver that just needs a
+USB connection to the host.
 
-Reticulum can also be encapsulated over existing IP networks, so there's nothing stopping you from using it over wired ethernet or your local WiFi network, where it'll work just as well. In fact, one of the strengths of Reticulum is how easily it allows you to connect different mediums into a self-configuring, resilient and encrypted mesh.
+Reticulum can also be encapsulated over existing IP networks, so there's
+nothing stopping you from using it over wired Ethernet, your local WiFi network
+or the Internet, where it'll work just as well. In fact, one of the strengths
+of Reticulum is how easily it allows you to connect different mediums into a
+self-configuring, resilient and encrypted mesh, using any available mixture of
+available infrastructure.
 
-As an example, it's possible to set up a Raspberry Pi connected to both a LoRa radio, a packet radio TNC and a WiFi network. Once the interfaces are configured, Reticulum will take care of the rest, and any device on the WiFi network can communicate with nodes on the LoRa and packet radio sides of the network, and vice versa.
+As an example, it's possible to set up a Raspberry Pi connected to both a LoRa
+radio, a packet radio TNC and a WiFi network. Once the interfaces are
+configured, Reticulum will take care of the rest, and any device on the WiFi
+network can communicate with nodes on the LoRa and packet radio sides of the
+network, and vice versa.
 
 ## How do I get started?
 The best way to get started with the Reticulum Network Stack depends on what
-you want to do. For full details and examples, have a look at the [Getting Started Fast](https://markqvist.github.io/Reticulum/manual/gettingstartedfast.html) section of the [Reticulum Manual](https://markqvist.github.io/Reticulum/manual/).
+you want to do. For full details and examples, have a look at the
+[Getting Started Fast](https://markqvist.github.io/Reticulum/manual/gettingstartedfast.html)
+section of the [Reticulum Manual](https://markqvist.github.io/Reticulum/manual/).
 
-To simply install Reticulum and related utilities on your system, the easiest way is via pip:
+To simply install Reticulum and related utilities on your system, the easiest way is via `pip`.
+You can then start any program that uses Reticulum, or start Reticulum as a system service with
+[the rnsd utility](https://markqvist.github.io/Reticulum/manual/using.html#the-rnsd-utility).
 
 ```bash
-pip3 install rns
+pip install rns
 ```
 
-You can then start any program that uses Reticulum, or start Reticulum as a system service with [the rnsd utility](https://markqvist.github.io/Reticulum/manual/using.html#the-rnsd-utility).
+If you are using an operating system that blocks normal user package installation via `pip`,
+you can return `pip` to normal behaviour by editing the `~/.config/pip/pip.conf` file,
+and adding the following directive in the `[global]` section:
 
-When first started, Reticulum will create a default configuration file, providing basic connectivity to other Reticulum peers. The default config file contains examples for using Reticulum with LoRa transceivers (specifically [RNode](https://unsigned.io/projects/rnode/)), packet radio TNCs/modems, TCP and UDP.
+```text
+[global]
+break-system-packages = true
+```
 
-You can use the examples in the config file to expand communication over many mediums such as packet radio or LoRa (with [RNode](https://unsigned.io/projects/rnode/)), serial ports, or over fast IP links and the Internet using the UDP and TCP interfaces. For more detailed examples, take a look at the [Supported Interfaces](https://markqvist.github.io/Reticulum/manual/interfaces.html) section of the [Reticulum Manual](https://markqvist.github.io/Reticulum/manual/).
+Alternatively, you can use the `pipx` tool to install Reticulum in an isolated environment:
 
+```bash
+pipx install rns
+```
+
+When first started, Reticulum will create a default configuration file,
+providing basic connectivity to other Reticulum peers that might be locally
+reachable. The default config file contains a few examples, and references for
+creating a more complex configuration.
+
+If you have an old version of `pip` on your system, you may need to upgrade it first with `pip install pip --upgrade`. If you no not already have `pip` installed, you can install it using the package manager of your system with `sudo apt install python3-pip` or similar.
+
+For more detailed examples on how to expand communication over many mediums such
+as packet radio or LoRa, serial ports, or over fast IP links and the Internet using
+the UDP and TCP interfaces, take a look at the [Supported Interfaces](https://markqvist.github.io/Reticulum/manual/interfaces.html)
+section of the [Reticulum Manual](https://markqvist.github.io/Reticulum/manual/).
 
 ## Included Utilities
-Reticulum includes a range of useful utilities for managing your networks, viewing status and information, and other tasks. You can read more about these programs in the [Included Utility Programs](https://markqvist.github.io/Reticulum/manual/using.html#included-utility-programs) section of the [Reticulum Manual](https://markqvist.github.io/Reticulum/manual/).
+Reticulum includes a range of useful utilities for managing your networks,
+viewing status and information, and other tasks. You can read more about these
+programs in the [Included Utility Programs](https://markqvist.github.io/Reticulum/manual/using.html#included-utility-programs)
+section of the [Reticulum Manual](https://markqvist.github.io/Reticulum/manual/).
 
 - The system daemon `rnsd` for running Reticulum as an always-available service
 - An interface status utility called `rnstatus`, that displays information about interfaces
-- The path lookup and and management tool `rnpath` letting you view and modify path tables
+- The path lookup and management tool `rnpath` letting you view and modify path tables
 - A diagnostics tool called `rnprobe` for checking connectivity to destinations
-- A simple file transfer program called `rncp` making easy to copy files to remote systems
-- The remote command execution program `rnx` that let's you run commands and programs and retrieve output from remote systems
+- A simple file transfer program called `rncp` making it easy to transfer files between systems
+- The identity management and encryption utility `rnid` let's you manage Identities and encrypt/decrypt files
+- The remote command execution program `rnx` let's you run commands and
+programs and retrieve output from remote systems
 
-All tools, including `rnx` and `rncp`, work reliably and well even over very low-bandwidth links like LoRa or Packet Radio.
-
-## Current Status
-Reticulum should currently be considered beta software. All core protocol features are implemented and functioning, but additions will probably occur as real-world use is explored. There will be bugs. The API and wire-format can be considered relatively stable at the moment, but could change if warranted.
+All tools, including `rnx` and `rncp`, work reliably and well even over very
+low-bandwidth links like LoRa or Packet Radio. For full-featured remote shells
+over Reticulum, also have a look at the [rnsh](https://github.com/acehoss/rnsh)
+program.
 
 ## Supported interface types and devices
 
-Reticulum implements a range of generalised interface types that covers most of the communications hardware that Reticulum can run over. If your hardware is not supported, it's relatively simple to implement an interface class. I will gratefully accept pull requests for custom interfaces if they are generally useful.
+Reticulum implements a range of generalised interface types that covers most of
+the communications hardware that Reticulum can run over. If your hardware is
+not supported, it's relatively simple to implement an interface class. I will
+gratefully accept pull requests for custom interfaces if they are generally
+useful.
 
 Currently, the following interfaces are supported:
 
-- Any ethernet device
-- LoRa using [RNode](https://unsigned.io/projects/rnode/)
+- Any Ethernet device
+- LoRa using [RNode](https://unsigned.io/rnode/)
 - Packet Radio TNCs (with or without AX.25)
 - KISS-compatible hardware and software modems
 - Any device with a serial port
@ -106,66 +185,86 @@ Currently, the following interfaces are supported:
|
|||
- External programs via stdio or pipes
|
||||
- Custom hardware via stdio or pipes
|
||||
|
||||
## Development Roadmap
|
||||
- Version 0.3.8
|
||||
- Improving [the manual](https://markqvist.github.io/Reticulum/manual/) with sections specifically for beginners
|
||||
- Utilities for managing identities, signing and encryption
|
||||
- Support for radio and modem interfaces on Android
|
||||
- User friendly interface configuration tool
|
||||
- Easy way to share interface configurations, see [#19](https://github.com/markqvist/Reticulum/discussions/19)
|
||||
- More interface types for even broader compatibility
|
||||
- Plain ESP32 devices (ESP-Now, WiFi, Bluetooth, etc.)
|
||||
- More LoRa transceivers
|
||||
- AT-compatible modems
|
||||
- IR Transceivers
|
||||
- AWDL / OWL
|
||||
- HF Modems
|
||||
- CAN-bus
|
||||
- ZeroMQ
|
||||
- MQTT
|
||||
- IrDA / IrPHY
|
||||
- SPI
|
||||
- i²c
|
||||
- Version 0.3.9
|
||||
- A portable cryptography core, supporting multiple backends
|
||||
- Performance optimisations
|
||||
- Memory optimisations
|
||||
- Planned, but not yet scheduled
|
||||
- Globally routable multicast
|
||||
- Bindings for other programming languages
|
||||
- A portable Reticulum implementation in C, see [#21](https://github.com/markqvist/Reticulum/discussions/21)
|
||||
## Performance
|
||||
Reticulum targets a *very* wide usable performance envelope, but prioritises
|
||||
functionality and performance on low-bandwidth mediums. The goal is to
|
||||
provide a dynamic performance envelope from 250 bits per second, to 1 gigabit
|
||||
per second on normal hardware.
|
||||
|
||||
Currently, the usable performance envelope is approximately 150 bits per second
|
||||
to 40 megabits per second, with physical mediums faster than that not being
|
||||
saturated. Performance beyond the current level is intended for future
|
||||
upgrades, but not highly prioritised at this point in time.
|
||||
|
||||
## Current Status
|
||||
Reticulum should currently be considered beta software. All core protocol
|
||||
features are implemented and functioning, but additions will probably occur as
|
||||
real-world use is explored. There will be bugs. The API and wire-format can be
|
||||
considered relatively stable at the moment, but could change if warranted.
|
||||
|
||||
## Dependencies:
|
||||
- Python 3.6
|
||||
- cryptography.io
|
||||
- netifaces
|
||||
- pyserial
|
||||
## Dependencies
|
||||
The installation of the default `rns` package requires the dependencies listed
|
||||
below. Almost all systems and distributions have readily available packages for
|
||||
these dependencies, and when the `rns` package is installed with `pip`, they
|
||||
will be downloaded and installed as well.
|
||||
|
||||
- [PyCA/cryptography](https://github.com/pyca/cryptography)
|
||||
- [pyserial](https://github.com/pyserial/pyserial)
|
||||
|
||||
On more unusual systems, and in some rare cases, it might not be possible to
|
||||
install or even compile one or more of the above modules. In such situations,
|
||||
you can use the `rnspure` package instead, which require no external
|
||||
dependencies for installation. Please note that the contents of the `rns` and
|
||||
`rnspure` packages are *identical*. The only difference is that the `rnspure`
|
||||
package lists no dependencies required for installation.
|
||||
|
||||
No matter how Reticulum is installed and started, it will load external
|
||||
dependencies only if they are *needed* and *available*. If for example you want
|
||||
to use Reticulum on a system that cannot support
|
||||
[pyserial](https://github.com/pyserial/pyserial), it is perfectly possible to
|
||||
do so using the `rnspure` package, but Reticulum will not be able to use
|
||||
serial-based interfaces. All other available modules will still be loaded when
|
||||
needed.
|
||||
|
||||
**Please Note!** If you use the `rnspure` package to run Reticulum on systems
|
||||
that do not support [PyCA/cryptography](https://github.com/pyca/cryptography),
|
||||
it is important that you read and understand the [Cryptographic
|
||||
Primitives](#cryptographic-primitives) section of this document.
|
||||
|
||||
## Public Testnet

If you just want to get started experimenting without building any physical networks, you are welcome to join the Unsigned.io RNS Testnet. The testnet is just that, an informal network for testing and experimenting. It will be up most of the time, and anyone can join, but this also means that there are no guarantees for service availability.

The testnet runs the very latest version of Reticulum (often even a short while before it is publicly released). Sometimes experimental versions of Reticulum might be deployed to nodes on the testnet, which means strange behaviour might occur. If none of that scares you, you can join the testnet via either TCP or I2P. Just add one of the following interfaces to your Reticulum configuration file:

```
# For connecting over TCP/IP:

# TCP/IP interface to the RNS Amsterdam Hub
[[RNS Testnet Amsterdam]]
  type = TCPClientInterface
  enabled = yes
  target_host = amsterdam.connect.reticulum.network
  target_port = 4965

# TCP/IP interface to the BetweenTheBorders Hub (community-provided)
[[RNS Testnet BetweenTheBorders]]
  type = TCPClientInterface
  enabled = yes
  target_host = betweentheborders.com
  target_port = 4242

# For connecting over I2P:

# Interface to Testnet I2P Hub
[[RNS Testnet I2P Hub]]
  type = I2PInterface
  enabled = yes
  peers = g3br23bvx3lq5uddcsjii74xgmn6y5q325ovrkq2zw2wbzbqgbuq.b32.i2p
```

The testnet also contains a number of [Nomad Network](https://github.com/markqvist/nomadnet) nodes, and LXMF propagation nodes.

## Support Reticulum

You can help support the continued development of open, free and private communications systems by donating via one of the following channels:

- Monero:
  ```
  84FpY1QbxHcgdseePYNmhTHcrgMX4nFfBYtz2GKYToqHVVhJp8Eaw1Z1EedRnKD19b3B8NiLCGVxzKV17UMmmeEsCrPyA5w
  ```
- Ethereum:
  ```
  0xFDabC71AC4c0C78C95aDDDe3B4FA19d6273c5E73
  ```
- Bitcoin:
  ```
  35G9uWVzrpJJibzUwpNUQGQNFzLirhrYAH
  ```
- Ko-Fi: https://ko-fi.com/markqvist

Are certain features in the development roadmap important to you or your organisation? Make them a reality quickly by sponsoring their implementation.

## Cryptographic Primitives

Reticulum uses a simple suite of efficient, strong and modern cryptographic primitives, with widely available implementations that can be used both on general-purpose CPUs and on microcontrollers. The necessary primitives are:

- Ed25519 for signatures
- X25519 for ECDH key exchanges
- HKDF for key derivation
- Modified Fernet for encrypted tokens
  - AES-128 in CBC mode
  - HMAC for message authentication
  - No Fernet version and timestamp fields
- SHA-256
- SHA-512

In the default installation configuration, the `X25519`, `Ed25519` and `AES-128-CBC` primitives are provided by [OpenSSL](https://www.openssl.org/) (via the [PyCA/cryptography](https://github.com/pyca/cryptography) package). The hashing functions `SHA-256` and `SHA-512` are provided by the standard Python [hashlib](https://docs.python.org/3/library/hashlib.html). The `HKDF`, `HMAC` and `Fernet` primitives, and the `PKCS7` padding function, are always provided by the following internal implementations:

- [HKDF.py](RNS/Cryptography/HKDF.py)
- [HMAC.py](RNS/Cryptography/HMAC.py)
- [Fernet.py](RNS/Cryptography/Fernet.py)
- [PKCS7.py](RNS/Cryptography/PKCS7.py)

Reticulum also includes a complete implementation of all necessary primitives in pure Python. If OpenSSL & PyCA are not available on the system when Reticulum is started, Reticulum will instead use the internal pure-Python primitives. A trivial consequence of this is performance, with the OpenSSL backend being *much* faster. The most important consequence, however, is the potential loss of security by using primitives that have not seen the same amount of scrutiny, testing and review as those from OpenSSL.

If you want to use the internal pure-Python primitives, it is **highly advisable** that you have a good understanding of the risks this poses, and make an informed decision on whether those risks are acceptable to you.
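For a concrete sense of what the HKDF step does, here is a minimal RFC 5869 HKDF-SHA256 sketch using only the Python standard library. The function name and signature are illustrative only; this is not the API of the internal `HKDF.py`:

```python
import hashlib
import hmac

def hkdf_sha256(length: int, ikm: bytes, salt: bytes = b"", info: bytes = b"") -> bytes:
    """RFC 5869 extract-and-expand KDF with SHA-256 (illustrative sketch)."""
    hash_len = hashlib.sha256().digest_size
    # Extract: PRK = HMAC-Hash(salt, IKM); an empty salt defaults to a zero block
    if len(salt) == 0:
        salt = bytes(hash_len)
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    # Expand: chain HMAC blocks T(1), T(2), ... until enough output bytes exist
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]
```

The extract step condenses possibly non-uniform input keying material into a fixed-size pseudorandom key; the expand step stretches that key into as many output bytes as the caller needs.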

## Caveat Emptor

Reticulum is relatively young software, and should be considered as such. While it has been built with cryptography best-practices very foremost in mind, it _has not_ been externally security audited, and there could very well be privacy or security breaking bugs. If you want to help out, or help sponsor an audit, please do get in touch.

## Acknowledgements & Credits

Reticulum can only exist because of the mountain of Open Source work it was built on top of, the contributions of everyone involved, and everyone that has supported the project through the years. To everyone who has helped, thank you so much.

A number of other modules and projects are either part of, or used by Reticulum. Sincere thanks to the authors and contributors of the following projects:

- [PyCA/cryptography](https://github.com/pyca/cryptography), *BSD License*
- [Pure-25519](https://github.com/warner/python-pure25519) by [Brian Warner](https://github.com/warner), *MIT License*
- [Pysha2](https://github.com/thomdixon/pysha2) by [Thom Dixon](https://github.com/thomdixon), *MIT License*
- [Python-AES](https://github.com/orgurar/python-aes) by [Or Gur Arie](https://github.com/orgurar), *MIT License*
- [Curve25519.py](https://gist.github.com/nickovs/cc3c22d15f239a2640c185035c06f8a3#file-curve25519-py) by [Nicko van Someren](https://gist.github.com/nickovs), *Public Domain*
- [I2Plib](https://github.com/l-n-s/i2plib) by [Viktor Villainov](https://github.com/l-n-s)
- [PySerial](https://github.com/pyserial/pyserial) by Chris Liechti, *BSD License*
- [Configobj](https://github.com/DiffSK/configobj) by Michael Foord, Nicola Larosa, Rob Dennis & Eli Courtwright, *BSD License*
- [Six](https://github.com/benjaminp/six) by [Benjamin Peterson](https://github.com/benjaminp), *MIT License*
- [ifaddr](https://github.com/pydron/ifaddr) by [Pydron](https://github.com/pydron), *MIT License*
- [Umsgpack.py](https://github.com/vsergeev/u-msgpack-python) by [Ivan A. Sergeev](https://github.com/vsergeev)
- [Python](https://www.python.org)

@@ -0,0 +1,359 @@
# MIT License
#
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from __future__ import annotations
import bz2
import sys
import time
import threading
from threading import RLock
import struct
from RNS.Channel import Channel, MessageBase, SystemMessageTypes
import RNS
from io import RawIOBase, BufferedRWPair, BufferedReader, BufferedWriter
from typing import Callable
from contextlib import AbstractContextManager

class StreamDataMessage(MessageBase):
    MSGTYPE = SystemMessageTypes.SMT_STREAM_DATA
    """
    Message type for ``Channel``. ``StreamDataMessage``
    uses a system-reserved message type.
    """

    STREAM_ID_MAX = 0x3fff  # 16383
    """
    The stream id is carried in a 2 byte header field, minus
    the 2 bits used for the eof and compression flags,
    leaving 14 bits (0-16383).
    """

    MAX_DATA_LEN = RNS.Link.MDU - 2 - 6  # 2 for stream data message header, 6 for channel envelope
    """
    When the Buffer package is imported, this value is
    calculated based on the value of OVERHEAD
    """

    def __init__(self, stream_id: int = None, data: bytes = None, eof: bool = False, compressed: bool = False):
        """
        This class is used to encapsulate binary stream
        data to be sent over a ``Channel``.

        :param stream_id: id of stream relative to receiver
        :param data: binary data
        :param eof: set to True if signalling End of File
        """
        super().__init__()
        if stream_id is not None and stream_id > self.STREAM_ID_MAX:
            raise ValueError("stream_id must be 0-16383")
        self.stream_id = stream_id
        self.compressed = compressed
        self.data = data or bytes()
        self.eof = eof

    def pack(self) -> bytes:
        if self.stream_id is None:
            raise ValueError("stream_id")

        header_val = (0x3fff & self.stream_id) | (0x8000 if self.eof else 0x0000) | (0x4000 if self.compressed else 0x0000)
        return bytes(struct.pack(">H", header_val) + (self.data if self.data else bytes()))

    def unpack(self, raw):
        self.stream_id = struct.unpack(">H", raw[:2])[0]
        self.eof = (0x8000 & self.stream_id) > 0
        self.compressed = (0x4000 & self.stream_id) > 0
        self.stream_id = self.stream_id & 0x3fff
        self.data = raw[2:]

        if self.compressed:
            self.data = bz2.decompress(self.data)

class RawChannelReader(RawIOBase, AbstractContextManager):
    """
    An implementation of RawIOBase that receives
    binary stream data sent over a ``Channel``.

    This class generally need not be instantiated directly.
    Use :func:`RNS.Buffer.create_reader`,
    :func:`RNS.Buffer.create_writer`, and
    :func:`RNS.Buffer.create_bidirectional_buffer` functions
    to create buffered streams with optional callbacks.

    For additional information on the API of this
    object, see the Python documentation for
    ``RawIOBase``.
    """
    def __init__(self, stream_id: int, channel: Channel):
        """
        Create a raw channel reader.

        :param stream_id: local stream id to receive at
        :param channel: ``Channel`` object to receive from
        """
        self._stream_id = stream_id
        self._channel = channel
        self._lock = RLock()
        self._buffer = bytearray()
        self._eof = False
        self._channel._register_message_type(StreamDataMessage, is_system_type=True)
        self._channel.add_message_handler(self._handle_message)
        self._listeners: list[Callable[[int], None]] = []

    def add_ready_callback(self, cb: Callable[[int], None]):
        """
        Add a function to be called when new data is available.
        The function should have the signature ``(ready_bytes: int) -> None``

        :param cb: function to call
        """
        with self._lock:
            self._listeners.append(cb)

    def remove_ready_callback(self, cb: Callable[[int], None]):
        """
        Remove a function added with :func:`RNS.RawChannelReader.add_ready_callback()`

        :param cb: function to remove
        """
        with self._lock:
            self._listeners.remove(cb)

    def _handle_message(self, message: MessageBase):
        if isinstance(message, StreamDataMessage):
            if message.stream_id == self._stream_id:
                with self._lock:
                    if message.data is not None:
                        self._buffer.extend(message.data)
                    if message.eof:
                        self._eof = True
                    for listener in self._listeners:
                        try:
                            threading.Thread(target=listener, name="Message Callback", args=[len(self._buffer)], daemon=True).start()
                        except Exception as ex:
                            RNS.log("Error calling RawChannelReader(" + str(self._stream_id) + ") callback: " + str(ex), RNS.LOG_ERROR)
                    return True
        return False

    def _read(self, __size: int) -> bytes | None:
        with self._lock:
            result = self._buffer[:__size]
            self._buffer = self._buffer[__size:]
            return result if len(result) > 0 or self._eof else None

    def readinto(self, __buffer: bytearray) -> int | None:
        ready = self._read(len(__buffer))
        if ready is not None:
            __buffer[:len(ready)] = ready
        return len(ready) if ready is not None else None

    def writable(self) -> bool:
        return False

    def seekable(self) -> bool:
        return False

    def readable(self) -> bool:
        return True

    def close(self):
        with self._lock:
            self._channel.remove_message_handler(self._handle_message)
            self._listeners.clear()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()
        return False

class RawChannelWriter(RawIOBase, AbstractContextManager):
    """
    An implementation of RawIOBase that writes
    binary stream data to be sent over a ``Channel``.

    This class generally need not be instantiated directly.
    Use :func:`RNS.Buffer.create_reader`,
    :func:`RNS.Buffer.create_writer`, and
    :func:`RNS.Buffer.create_bidirectional_buffer` functions
    to create buffered streams with optional callbacks.

    For additional information on the API of this
    object, see the Python documentation for
    ``RawIOBase``.
    """

    MAX_CHUNK_LEN = 1024*16
    COMPRESSION_TRIES = 4

    def __init__(self, stream_id: int, channel: Channel):
        """
        Create a raw channel writer.

        :param stream_id: remote stream id to send to
        :param channel: ``Channel`` object to send on
        """
        self._stream_id = stream_id
        self._channel = channel
        self._eof = False

    def write(self, __b: bytes) -> int | None:
        try:
            comp_tries = RawChannelWriter.COMPRESSION_TRIES
            comp_try = 1
            comp_success = False
            chunk_len = len(__b)
            if chunk_len > RawChannelWriter.MAX_CHUNK_LEN:
                chunk_len = RawChannelWriter.MAX_CHUNK_LEN
                __b = __b[:RawChannelWriter.MAX_CHUNK_LEN]
            while chunk_len > 32 and comp_try < comp_tries:
                chunk_segment_length = int(chunk_len/comp_try)
                compressed_chunk = bz2.compress(__b[:chunk_segment_length])
                compressed_length = len(compressed_chunk)
                if compressed_length < StreamDataMessage.MAX_DATA_LEN and compressed_length < chunk_segment_length:
                    comp_success = True
                    break
                else:
                    comp_try += 1

            if comp_success:
                chunk = compressed_chunk
                processed_length = chunk_segment_length
            else:
                chunk = bytes(__b[:StreamDataMessage.MAX_DATA_LEN])
                processed_length = len(chunk)

            message = StreamDataMessage(self._stream_id, chunk, self._eof, comp_success)

            self._channel.send(message)
            return processed_length

        except RNS.Channel.ChannelException as cex:
            if cex.type != RNS.Channel.CEType.ME_LINK_NOT_READY:
                raise
            return 0

    def close(self):
        try:
            link_rtt = self._channel._outlet.link.rtt
            timeout = time.time() + (link_rtt * len(self._channel._tx_ring) * 1)
        except Exception as e:
            timeout = time.time() + 15

        while time.time() < timeout and not self._channel.is_ready_to_send():
            time.sleep(0.05)

        self._eof = True
        self.write(bytes())

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()
        return False

    def seekable(self) -> bool:
        return False

    def readable(self) -> bool:
        return False

    def writable(self) -> bool:
        return True

class Buffer:
    """
    Static functions for creating buffered streams that send
    and receive over a ``Channel``.

    These functions use ``BufferedReader``, ``BufferedWriter``,
    and ``BufferedRWPair`` to add buffering to
    ``RawChannelReader`` and ``RawChannelWriter``.
    """
    @staticmethod
    def create_reader(stream_id: int, channel: Channel,
                      ready_callback: Callable[[int], None] | None = None) -> BufferedReader:
        """
        Create a buffered reader that reads binary data sent
        over a ``Channel``, with an optional callback when
        new data is available.

        Callback signature: ``(ready_bytes: int) -> None``

        For more information on the reader-specific functions
        of this object, see the Python documentation for
        ``BufferedReader``

        :param stream_id: the local stream id to receive from
        :param channel: the channel to receive on
        :param ready_callback: function to call when new data is available
        :return: a BufferedReader object
        """
        reader = RawChannelReader(stream_id, channel)
        if ready_callback:
            reader.add_ready_callback(ready_callback)
        return BufferedReader(reader)

    @staticmethod
    def create_writer(stream_id: int, channel: Channel) -> BufferedWriter:
        """
        Create a buffered writer that writes binary data over
        a ``Channel``.

        For more information on the writer-specific functions
        of this object, see the Python documentation for
        ``BufferedWriter``

        :param stream_id: the remote stream id to send to
        :param channel: the channel to send on
        :return: a BufferedWriter object
        """
        writer = RawChannelWriter(stream_id, channel)
        return BufferedWriter(writer)

    @staticmethod
    def create_bidirectional_buffer(receive_stream_id: int, send_stream_id: int, channel: Channel,
                                    ready_callback: Callable[[int], None] | None = None) -> BufferedRWPair:
        """
        Create a buffered reader/writer pair that reads and
        writes binary data over a ``Channel``, with an
        optional callback when new data is available.

        Callback signature: ``(ready_bytes: int) -> None``

        For more information on the functions of this
        object, see the Python documentation for
        ``BufferedRWPair``

        :param receive_stream_id: the local stream id to receive at
        :param send_stream_id: the remote stream id to send to
        :param channel: the channel to send and receive on
        :param ready_callback: function to call when new data is available
        :return: a BufferedRWPair object
        """
        reader = RawChannelReader(receive_stream_id, channel)
        if ready_callback:
            reader.add_ready_callback(ready_callback)
        writer = RawChannelWriter(send_stream_id, channel)
        return BufferedRWPair(reader, writer)
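The two-byte `StreamDataMessage` header above packs the eof flag into bit 15, the compression flag into bit 14, and the stream id into the low 14 bits. A standalone sketch of that bit layout (helper names are illustrative, not part of the module):

```python
import struct

STREAM_ID_MAX = 0x3fff  # low 14 bits carry the stream id

def pack_header(stream_id: int, eof: bool, compressed: bool) -> bytes:
    # Bit 15 flags end-of-stream, bit 14 flags bz2 compression
    if not 0 <= stream_id <= STREAM_ID_MAX:
        raise ValueError("stream_id must be 0-16383")
    value = stream_id | (0x8000 if eof else 0) | (0x4000 if compressed else 0)
    return struct.pack(">H", value)

def unpack_header(raw: bytes) -> tuple[int, bool, bool]:
    # Returns (stream_id, eof, compressed) from the first two bytes
    value = struct.unpack(">H", raw[:2])[0]
    return value & 0x3fff, bool(value & 0x8000), bool(value & 0x4000)
```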

@@ -0,0 +1,694 @@
# MIT License
#
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from __future__ import annotations
import collections
import enum
import threading
import time
from types import TracebackType
from typing import Type, Callable, TypeVar, Generic, NewType
import abc
import contextlib
import struct
import RNS
from abc import ABC, abstractmethod

TPacket = TypeVar("TPacket")

class SystemMessageTypes(enum.IntEnum):
    SMT_STREAM_DATA = 0xff00


class ChannelOutletBase(ABC, Generic[TPacket]):
    """
    An abstract transport layer interface used by Channel.

    DEPRECATED: This was created for testing; eventually
    Channel will use Link or a LinkBase interface
    directly.
    """
    @abstractmethod
    def send(self, raw: bytes) -> TPacket:
        raise NotImplementedError()

    @abstractmethod
    def resend(self, packet: TPacket) -> TPacket:
        raise NotImplementedError()

    @property
    @abstractmethod
    def mdu(self):
        raise NotImplementedError()

    @property
    @abstractmethod
    def rtt(self):
        raise NotImplementedError()

    @property
    @abstractmethod
    def is_usable(self):
        raise NotImplementedError()

    @abstractmethod
    def get_packet_state(self, packet: TPacket) -> MessageState:
        raise NotImplementedError()

    @abstractmethod
    def timed_out(self):
        raise NotImplementedError()

    @abstractmethod
    def __str__(self):
        raise NotImplementedError()

    @abstractmethod
    def set_packet_timeout_callback(self, packet: TPacket, callback: Callable[[TPacket], None] | None,
                                    timeout: float | None = None):
        raise NotImplementedError()

    @abstractmethod
    def set_packet_delivered_callback(self, packet: TPacket, callback: Callable[[TPacket], None] | None):
        raise NotImplementedError()

    @abstractmethod
    def get_packet_id(self, packet: TPacket) -> any:
        raise NotImplementedError()

class CEType(enum.IntEnum):
    """
    ChannelException type codes
    """
    ME_NO_MSG_TYPE = 0
    ME_INVALID_MSG_TYPE = 1
    ME_NOT_REGISTERED = 2
    ME_LINK_NOT_READY = 3
    ME_ALREADY_SENT = 4
    ME_TOO_BIG = 5


class ChannelException(Exception):
    """
    An exception thrown by Channel, with a type code.
    """
    def __init__(self, ce_type: CEType, *args):
        super().__init__(*args)
        self.type = ce_type


class MessageState(enum.IntEnum):
    """
    Set of possible states for a Message
    """
    MSGSTATE_NEW = 0
    MSGSTATE_SENT = 1
    MSGSTATE_DELIVERED = 2
    MSGSTATE_FAILED = 3

class MessageBase(abc.ABC):
    """
    Base type for any messages sent or received on a Channel.
    Subclasses must define the two abstract methods as well as
    the ``MSGTYPE`` class variable.
    """
    # MSGTYPE must be unique within all classes sent over a
    # channel. Additionally, MSGTYPE >= 0xf000 is reserved.
    MSGTYPE = None
    """
    Defines a unique identifier for a message class.

    * Must be unique within all classes registered with a ``Channel``
    * Must be less than ``0xf000``. Values greater than or equal to ``0xf000`` are reserved.
    """

    @abstractmethod
    def pack(self) -> bytes:
        """
        Create and return the binary representation of the message

        :return: binary representation of message
        """
        raise NotImplementedError()

    @abstractmethod
    def unpack(self, raw: bytes):
        """
        Populate message from binary representation

        :param raw: binary representation
        """
        raise NotImplementedError()


MessageCallbackType = NewType("MessageCallbackType", Callable[[MessageBase], bool])

class Envelope:
    """
    Internal wrapper used to transport messages over a channel,
    and to track a message's state within the channel framework.
    """
    def __init__(self, outlet: ChannelOutletBase, message: MessageBase = None, raw: bytes = None, sequence: int = None):
        self.ts = time.time()
        self.id = id(self)
        self.message = message
        self.raw = raw
        self.packet: TPacket = None
        self.sequence = sequence
        self.outlet = outlet
        self.tries = 0
        self.unpacked = False
        self.packed = False
        self.tracked = False

    def unpack(self, message_factories: dict[int, Type]) -> MessageBase:
        msgtype, self.sequence, length = struct.unpack(">HHH", self.raw[:6])
        raw = self.raw[6:]
        ctor = message_factories.get(msgtype, None)
        if ctor is None:
            raise ChannelException(CEType.ME_NOT_REGISTERED, f"Unable to find constructor for Channel MSGTYPE {hex(msgtype)}")
        message = ctor()
        message.unpack(raw)
        self.unpacked = True
        self.message = message

        return message

    def pack(self) -> bytes:
        if self.message.__class__.MSGTYPE is None:
            raise ChannelException(CEType.ME_NO_MSG_TYPE, f"{self.message.__class__} lacks MSGTYPE")
        data = self.message.pack()
        self.raw = struct.pack(">HHH", self.message.MSGTYPE, self.sequence, len(data)) + data
        self.packed = True
        return self.raw

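The ``Envelope`` wire format above is a fixed 6-byte big-endian header (message type, sequence number, payload length) followed by the packed message payload. A standalone sketch of that framing (helper names are illustrative, not part of the module):

```python
import struct

def pack_envelope(msgtype: int, sequence: int, payload: bytes) -> bytes:
    # 6-byte header: message type, sequence number, payload length (big-endian u16 each)
    return struct.pack(">HHH", msgtype, sequence, len(payload)) + payload

def unpack_envelope(raw: bytes) -> tuple[int, int, bytes]:
    # Inverse of pack_envelope: split the header off and return the payload
    msgtype, sequence, length = struct.unpack(">HHH", raw[:6])
    return msgtype, sequence, raw[6:6 + length]
```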
class Channel(contextlib.AbstractContextManager):
    """
    Provides reliable delivery of messages over
    a link.

    ``Channel`` differs from ``Request`` and
    ``Resource`` in some important ways:

    **Continuous**
        Messages can be sent or received as long as
        the ``Link`` is open.
    **Bi-directional**
        Messages can be sent in either direction on
        the ``Link``; neither end is the client or
        server.
    **Size-constrained**
        Messages must be encoded into a single packet.

    ``Channel`` is similar to ``Packet``, except that it
    provides reliable delivery (automatic retries) as well
    as a structure for exchanging several types of
    messages over the ``Link``.

    ``Channel`` is not instantiated directly, but rather
    obtained from a ``Link`` with ``get_channel()``.
    """

    # The initial window size at channel setup
    WINDOW = 2

    # Absolute minimum window size
    WINDOW_MIN = 2
    WINDOW_MIN_LIMIT_SLOW = 2
    WINDOW_MIN_LIMIT_MEDIUM = 5
    WINDOW_MIN_LIMIT_FAST = 16

    # The maximum window size for transfers on slow links
    WINDOW_MAX_SLOW = 5

    # The maximum window size for transfers on mid-speed links
    WINDOW_MAX_MEDIUM = 12

    # The maximum window size for transfers on fast links
    WINDOW_MAX_FAST = 48

    # For calculating maps and guard segments, this
    # must be set to the global maximum window.
    WINDOW_MAX = WINDOW_MAX_FAST

    # If the fast rate is sustained for this many request
    # rounds, the fast link window size will be allowed.
    FAST_RATE_THRESHOLD = 10

    # If the link RTT is lower than this value,
    # the max window size for fast links will be used.
    RTT_FAST = 0.18
    RTT_MEDIUM = 0.75
    RTT_SLOW = 1.45

    # The minimum allowed flexibility of the window size.
    # The difference between window_max and window_min
    # will never be smaller than this value.
    WINDOW_FLEXIBILITY = 4

    SEQ_MAX = 0xFFFF
    SEQ_MODULUS = SEQ_MAX + 1

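Sequence numbers are 16-bit values that wrap at ``SEQ_MODULUS``. A minimal sketch of the wrap-around arithmetic (hypothetical helper, not part of the Channel API):

```python
SEQ_MAX = 0xFFFF
SEQ_MODULUS = SEQ_MAX + 1

def next_sequence(seq: int) -> int:
    # Advance a 16-bit sequence number, wrapping back to 0 after 0xFFFF
    return (seq + 1) % SEQ_MODULUS
```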
    def __init__(self, outlet: ChannelOutletBase):
        """
        :param outlet: the transport outlet this channel sends and receives through
        """
        self._outlet = outlet
        self._lock = threading.RLock()
        self._tx_ring: collections.deque[Envelope] = collections.deque()
        self._rx_ring: collections.deque[Envelope] = collections.deque()
        self._message_callbacks: list[MessageCallbackType] = []
        self._next_sequence = 0
        self._next_rx_sequence = 0
        self._message_factories: dict[int, Type[MessageBase]] = {}
        self._max_tries = 5
        self.fast_rate_rounds = 0
        self.medium_rate_rounds = 0

        if self._outlet.rtt > Channel.RTT_SLOW:
            self.window = 1
            self.window_max = 1
            self.window_min = 1
            self.window_flexibility = 1
        else:
            self.window = Channel.WINDOW
            self.window_max = Channel.WINDOW_MAX_SLOW
            self.window_min = Channel.WINDOW_MIN
            self.window_flexibility = Channel.WINDOW_FLEXIBILITY

    def __enter__(self) -> Channel:
        return self

    def __exit__(self, __exc_type: Type[BaseException] | None, __exc_value: BaseException | None,
                 __traceback: TracebackType | None) -> bool | None:
        self._shutdown()
        return False

    def register_message_type(self, message_class: Type[MessageBase]):
        """
        Register a message class for reception over a ``Channel``.

        Message classes must extend ``MessageBase``.

        :param message_class: Class to register
        """
        self._register_message_type(message_class, is_system_type=False)

    def _register_message_type(self, message_class: Type[MessageBase], *, is_system_type: bool = False):
        with self._lock:
            if not issubclass(message_class, MessageBase):
                raise ChannelException(CEType.ME_INVALID_MSG_TYPE,
                                       f"{message_class} is not a subclass of {MessageBase}.")
            if message_class.MSGTYPE is None:
                raise ChannelException(CEType.ME_INVALID_MSG_TYPE,
                                       f"{message_class} has invalid MSGTYPE class attribute.")
            if message_class.MSGTYPE >= 0xf000 and not is_system_type:
                raise ChannelException(CEType.ME_INVALID_MSG_TYPE,
                                       f"{message_class} has system-reserved message type.")
            try:
                message_class()
            except Exception as ex:
                raise ChannelException(CEType.ME_INVALID_MSG_TYPE,
                                       f"{message_class} raised an exception when constructed with no arguments: {ex}")

            self._message_factories[message_class.MSGTYPE] = message_class

def add_message_handler(self, callback: MessageCallbackType):
|
||||
"""
|
||||
Add a handler for incoming messages. A handler
|
||||
has the following signature:
|
||||
|
||||
``(message: MessageBase) -> bool``
|
||||
|
||||
Handlers are processed in the order they are
|
||||
added. If any handler returns True, processing
|
||||
of the message stops; handlers after the
|
||||
returning handler will not be called.
|
||||
|
||||
:param callback: Function to call
|
||||
"""
|
||||
with self._lock:
|
||||
if callback not in self._message_callbacks:
|
||||
self._message_callbacks.append(callback)
|
||||
|
||||
def remove_message_handler(self, callback: MessageCallbackType):
|
||||
"""
|
||||
Remove a handler added with ``add_message_handler``.
|
||||
|
||||
:param callback: handler to remove
|
||||
"""
|
||||
with self._lock:
|
||||
if callback in self._message_callbacks:
|
||||
self._message_callbacks.remove(callback)
|
||||
|
||||
def _shutdown(self):
|
||||
with self._lock:
|
||||
self._message_callbacks.clear()
|
||||
self._clear_rings()
|
||||
|
||||
def _clear_rings(self):
|
||||
with self._lock:
|
||||
for envelope in self._tx_ring:
|
||||
if envelope.packet is not None:
|
||||
self._outlet.set_packet_timeout_callback(envelope.packet, None)
|
||||
self._outlet.set_packet_delivered_callback(envelope.packet, None)
|
||||
self._tx_ring.clear()
|
||||
self._rx_ring.clear()
|
||||
|
||||
def _emplace_envelope(self, envelope: Envelope, ring: collections.deque[Envelope]) -> bool:
|
||||
with self._lock:
|
||||
i = 0
|
||||
|
||||
for existing in ring:
|
||||
|
||||
if envelope.sequence == existing.sequence:
|
||||
RNS.log(f"Envelope: Emplacement of duplicate envelope with sequence "+str(envelope.sequence), RNS.LOG_EXTREME)
|
||||
return False
|
||||
|
||||
if envelope.sequence < existing.sequence and not (self._next_rx_sequence - envelope.sequence) > (Channel.SEQ_MAX//2):
|
||||
ring.insert(i, envelope)
|
||||
|
||||
envelope.tracked = True
|
||||
return True
|
||||
|
||||
i += 1
|
||||
|
||||
envelope.tracked = True
|
||||
ring.append(envelope)
|
||||
|
||||
return True
|
||||
|
||||
    def _run_callbacks(self, message: MessageBase):
        cbs = self._message_callbacks.copy()

        for cb in cbs:
            try:
                if cb(message):
                    return
            except Exception as e:
                RNS.log("Channel "+str(self)+" experienced an error while running a message callback. The contained exception was: "+str(e), RNS.LOG_ERROR)

    def _receive(self, raw: bytes):
        try:
            envelope = Envelope(outlet=self._outlet, raw=raw)
            with self._lock:
                message = envelope.unpack(self._message_factories)

                if envelope.sequence < self._next_rx_sequence:
                    window_overflow = (self._next_rx_sequence+Channel.WINDOW_MAX) % Channel.SEQ_MODULUS
                    if window_overflow < self._next_rx_sequence:
                        if envelope.sequence > window_overflow:
                            RNS.log("Invalid packet sequence ("+str(envelope.sequence)+") received on channel "+str(self), RNS.LOG_EXTREME)
                            return
                    else:
                        RNS.log("Invalid packet sequence ("+str(envelope.sequence)+") received on channel "+str(self), RNS.LOG_EXTREME)
                        return

                is_new = self._emplace_envelope(envelope, self._rx_ring)

            if not is_new:
                RNS.log("Duplicate message received on channel "+str(self), RNS.LOG_EXTREME)
                return
            else:
                with self._lock:
                    contiguous = []
                    for e in self._rx_ring:
                        if e.sequence == self._next_rx_sequence:
                            contiguous.append(e)
                            self._next_rx_sequence = (self._next_rx_sequence + 1) % Channel.SEQ_MODULUS
                            if self._next_rx_sequence == 0:
                                for e in self._rx_ring:
                                    if e.sequence == self._next_rx_sequence:
                                        contiguous.append(e)
                                        self._next_rx_sequence = (self._next_rx_sequence + 1) % Channel.SEQ_MODULUS

                    for e in contiguous:
                        if not e.unpacked:
                            m = e.unpack(self._message_factories)
                        else:
                            m = e.message

                        self._rx_ring.remove(e)
                        self._run_callbacks(m)

        except Exception as e:
            RNS.log("An error occurred while receiving data on "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)

    def is_ready_to_send(self) -> bool:
        """
        Check if ``Channel`` is ready to send.

        :return: True if ready
        """
        if not self._outlet.is_usable:
            return False

        with self._lock:
            outstanding = 0
            for envelope in self._tx_ring:
                if envelope.outlet == self._outlet:
                    if not envelope.packet or self._outlet.get_packet_state(envelope.packet) != MessageState.MSGSTATE_DELIVERED:
                        outstanding += 1

            if outstanding >= self.window:
                return False

        return True

    def _packet_tx_op(self, packet: TPacket, op: Callable[[TPacket], bool]):
        with self._lock:
            envelope = next(filter(lambda e: self._outlet.get_packet_id(e.packet) == self._outlet.get_packet_id(packet),
                                   self._tx_ring), None)

            if envelope and op(envelope):
                envelope.tracked = False
                if envelope in self._tx_ring:
                    self._tx_ring.remove(envelope)

                    if self.window < self.window_max:
                        self.window += 1

                        # TODO: Remove at some point
                        # RNS.log("Increased "+str(self)+" window to "+str(self.window), RNS.LOG_DEBUG)

                        if self._outlet.rtt != 0:
                            if self._outlet.rtt > Channel.RTT_FAST:
                                self.fast_rate_rounds = 0

                                if self._outlet.rtt > Channel.RTT_MEDIUM:
                                    self.medium_rate_rounds = 0

                                else:
                                    self.medium_rate_rounds += 1
                                    if self.window_max < Channel.WINDOW_MAX_MEDIUM and self.medium_rate_rounds == Channel.FAST_RATE_THRESHOLD:
                                        self.window_max = Channel.WINDOW_MAX_MEDIUM
                                        self.window_min = Channel.WINDOW_MIN_LIMIT_MEDIUM
                                        # TODO: Remove at some point
                                        # RNS.log("Increased "+str(self)+" max window to "+str(self.window_max), RNS.LOG_DEBUG)
                                        # RNS.log("Increased "+str(self)+" min window to "+str(self.window_min), RNS.LOG_DEBUG)

                            else:
                                self.fast_rate_rounds += 1
                                if self.window_max < Channel.WINDOW_MAX_FAST and self.fast_rate_rounds == Channel.FAST_RATE_THRESHOLD:
                                    self.window_max = Channel.WINDOW_MAX_FAST
                                    self.window_min = Channel.WINDOW_MIN_LIMIT_FAST
                                    # TODO: Remove at some point
                                    # RNS.log("Increased "+str(self)+" max window to "+str(self.window_max), RNS.LOG_DEBUG)
                                    # RNS.log("Increased "+str(self)+" min window to "+str(self.window_min), RNS.LOG_DEBUG)

                else:
                    RNS.log("Envelope not found in TX ring for "+str(self), RNS.LOG_EXTREME)

        if not envelope:
            RNS.log("Spurious message received on "+str(self), RNS.LOG_EXTREME)

    def _packet_delivered(self, packet: TPacket):
        self._packet_tx_op(packet, lambda env: True)

    def _update_packet_timeouts(self):
        for envelope in self._tx_ring:
            updated_timeout = self._get_packet_timeout_time(envelope.tries)
            if envelope.packet and hasattr(envelope.packet, "receipt") and envelope.packet.receipt and envelope.packet.receipt.timeout:
                if updated_timeout > envelope.packet.receipt.timeout:
                    envelope.packet.receipt.set_timeout(updated_timeout)

    def _get_packet_timeout_time(self, tries: int) -> float:
        to = pow(1.5, tries - 1) * max(self._outlet.rtt*2.5, 0.025) * (len(self._tx_ring)+1.5)
        return to

    def _packet_timeout(self, packet: TPacket):
        def retry_envelope(envelope: Envelope) -> bool:
            if envelope.tries >= self._max_tries:
                RNS.log("Retry count exceeded on "+str(self)+", tearing down Link.", RNS.LOG_ERROR)
                self._shutdown()  # start on separate thread?
                self._outlet.timed_out()
                return True

            envelope.tries += 1
            self._outlet.resend(envelope.packet)
            self._outlet.set_packet_delivered_callback(envelope.packet, self._packet_delivered)
            self._outlet.set_packet_timeout_callback(envelope.packet, self._packet_timeout, self._get_packet_timeout_time(envelope.tries))
            self._update_packet_timeouts()

            if self.window > self.window_min:
                self.window -= 1
                # TODO: Remove at some point
                # RNS.log("Decreased "+str(self)+" window to "+str(self.window), RNS.LOG_DEBUG)

                if self.window_max > (self.window_min+self.window_flexibility):
                    self.window_max -= 1
                    # TODO: Remove at some point
                    # RNS.log("Decreased "+str(self)+" max window to "+str(self.window_max), RNS.LOG_DEBUG)

            # TODO: Remove at some point
            # RNS.log("Decreased "+str(self)+" window to "+str(self.window), RNS.LOG_EXTREME)

            return False

        if self._outlet.get_packet_state(packet) != MessageState.MSGSTATE_DELIVERED:
            self._packet_tx_op(packet, retry_envelope)

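The retry timeout computed by `_get_packet_timeout_time` above grows exponentially with the retry count and scales with both the link RTT and the depth of the TX ring. A minimal standalone sketch of the same formula (the free function here is a hypothetical extraction for illustration, not part of the RNS API):

```python
def packet_timeout(tries: int, rtt: float, tx_ring_len: int) -> float:
    # Exponential backoff: each retry multiplies the timeout by 1.5.
    # The base is 2.5x RTT, floored at 25 ms, scaled by queue depth.
    return pow(1.5, tries - 1) * max(rtt * 2.5, 0.025) * (tx_ring_len + 1.5)

# First try on a 100 ms link with an empty TX ring: 0.25 * 1.5 = 0.375 s
print(packet_timeout(1, 0.1, 0))  # → 0.375
```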
    def send(self, message: MessageBase) -> Envelope:
        """
        Send a message. If a message send is attempted and
        ``Channel`` is not ready, an exception is thrown.

        :param message: an instance of a ``MessageBase`` subclass
        """
        envelope: Envelope | None = None
        with self._lock:
            if not self.is_ready_to_send():
                raise ChannelException(CEType.ME_LINK_NOT_READY, "Link is not ready")

            envelope = Envelope(self._outlet, message=message, sequence=self._next_sequence)
            self._next_sequence = (self._next_sequence + 1) % Channel.SEQ_MODULUS
            self._emplace_envelope(envelope, self._tx_ring)

        if envelope is None:
            raise BlockingIOError()

        envelope.pack()
        if len(envelope.raw) > self._outlet.mdu:
            raise ChannelException(CEType.ME_TOO_BIG, f"Packed message too big for packet: {len(envelope.raw)} > {self._outlet.mdu}")

        envelope.packet = self._outlet.send(envelope.raw)
        envelope.tries += 1
        self._outlet.set_packet_delivered_callback(envelope.packet, self._packet_delivered)
        self._outlet.set_packet_timeout_callback(envelope.packet, self._packet_timeout, self._get_packet_timeout_time(envelope.tries))
        self._update_packet_timeouts()

        return envelope

    @property
    def MDU(self):
        """
        Maximum Data Unit: the number of bytes available
        for a message to consume in a single send. This
        value is adjusted from the ``Link`` MDU to accommodate
        message header information.

        :return: number of bytes available
        """
        return self._outlet.mdu - 6  # sizeof(msgtype) + sizeof(length) + sizeof(sequence)


class LinkChannelOutlet(ChannelOutletBase):
    """
    An implementation of ChannelOutletBase for RNS.Link.
    Allows a Channel to send raw data over an RNS Link
    using Packets.

    :param link: RNS Link to wrap
    """
    def __init__(self, link: RNS.Link):
        self.link = link

    def send(self, raw: bytes) -> RNS.Packet:
        packet = RNS.Packet(self.link, raw, context=RNS.Packet.CHANNEL)
        if self.link.status == RNS.Link.ACTIVE:
            packet.send()
        return packet

    def resend(self, packet: RNS.Packet) -> RNS.Packet:
        receipt = packet.resend()
        if not receipt:
            RNS.log("Failed to resend packet", RNS.LOG_ERROR)
        return packet

    @property
    def mdu(self):
        return self.link.MDU

    @property
    def rtt(self):
        return self.link.rtt

    @property
    def is_usable(self):
        return True  # had issues looking at Link.status

    def get_packet_state(self, packet: TPacket) -> MessageState:
        if packet.receipt is None:
            return MessageState.MSGSTATE_FAILED

        status = packet.receipt.get_status()
        if status == RNS.PacketReceipt.SENT:
            return MessageState.MSGSTATE_SENT
        if status == RNS.PacketReceipt.DELIVERED:
            return MessageState.MSGSTATE_DELIVERED
        if status == RNS.PacketReceipt.FAILED:
            return MessageState.MSGSTATE_FAILED
        else:
            raise Exception(f"Unexpected receipt state: {status}")

    def timed_out(self):
        self.link.teardown()

    def __str__(self):
        return f"{self.__class__.__name__}({self.link})"

    def set_packet_timeout_callback(self, packet: RNS.Packet, callback: Callable[[RNS.Packet], None] | None,
                                    timeout: float | None = None):
        if timeout and packet.receipt:
            packet.receipt.set_timeout(timeout)

        def inner(receipt: RNS.PacketReceipt):
            callback(packet)

        if packet and packet.receipt:
            packet.receipt.set_timeout_callback(inner if callback else None)

    def set_packet_delivered_callback(self, packet: RNS.Packet, callback: Callable[[RNS.Packet], None] | None):
        def inner(receipt: RNS.PacketReceipt):
            callback(packet)

        if packet and packet.receipt:
            packet.receipt.set_delivery_callback(inner if callback else None)

    def get_packet_id(self, packet: RNS.Packet) -> any:
        if packet and hasattr(packet, "get_hash") and callable(packet.get_hash):
            return packet.get_hash()
        else:
            return None
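The sequence-validity check in `_receive` above accepts sequences numerically below `_next_rx_sequence` only when the receive window has wrapped past `SEQ_MAX`. A standalone sketch of that modular comparison (the `WINDOW_MAX` value of 48 is an illustrative assumption here; the real value comes from `WINDOW_MAX_FAST`, defined earlier in the file):

```python
SEQ_MAX = 0xFFFF
SEQ_MODULUS = SEQ_MAX + 1
WINDOW_MAX = 48  # assumed value for illustration

def sequence_is_valid(seq: int, next_rx: int) -> bool:
    # A sequence numerically below next_rx is normally stale, unless
    # the receive window has wrapped past SEQ_MAX and seq falls inside it.
    if seq >= next_rx:
        return True
    window_overflow = (next_rx + WINDOW_MAX) % SEQ_MODULUS
    return window_overflow < next_rx and seq <= window_overflow

print(sequence_is_valid(5, 100))      # stale duplicate → False
print(sequence_is_valid(3, 0xFFF0))   # window wrapped past 0xFFFF → True
```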

@@ -0,0 +1,68 @@

# MIT License
#
# Copyright (c) 2022 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import RNS.Cryptography.Provider as cp
import RNS.vendor.platformutils as pu

if cp.PROVIDER == cp.PROVIDER_INTERNAL:
    from .aes import AES

elif cp.PROVIDER == cp.PROVIDER_PYCA:
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    if pu.cryptography_old_api():
        from cryptography.hazmat.backends import default_backend


class AES_128_CBC:

    @staticmethod
    def encrypt(plaintext, key, iv):
        if cp.PROVIDER == cp.PROVIDER_INTERNAL:
            cipher = AES(key)
            return cipher.encrypt(plaintext, iv)

        elif cp.PROVIDER == cp.PROVIDER_PYCA:
            if not pu.cryptography_old_api():
                cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
            else:
                cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())

            encryptor = cipher.encryptor()
            ciphertext = encryptor.update(plaintext) + encryptor.finalize()
            return ciphertext

    @staticmethod
    def decrypt(ciphertext, key, iv):
        if cp.PROVIDER == cp.PROVIDER_INTERNAL:
            cipher = AES(key)
            return cipher.decrypt(ciphertext, iv)

        elif cp.PROVIDER == cp.PROVIDER_PYCA:
            if not pu.cryptography_old_api():
                cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
            else:
                cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())

            decryptor = cipher.decryptor()
            plaintext = decryptor.update(ciphertext) + decryptor.finalize()
            return plaintext

@@ -0,0 +1,41 @@

import os
from .pure25519 import ed25519_oop as ed25519

class Ed25519PrivateKey:
    def __init__(self, seed):
        self.seed = seed
        self.sk = ed25519.SigningKey(self.seed)
        #self.vk = self.sk.get_verifying_key()

    @classmethod
    def generate(cls):
        return cls.from_private_bytes(os.urandom(32))

    @classmethod
    def from_private_bytes(cls, data):
        return cls(seed=data)

    def private_bytes(self):
        return self.seed

    def public_key(self):
        return Ed25519PublicKey.from_public_bytes(self.sk.vk_s)

    def sign(self, message):
        return self.sk.sign(message)


class Ed25519PublicKey:
    def __init__(self, seed):
        self.seed = seed
        self.vk = ed25519.VerifyingKey(self.seed)

    @classmethod
    def from_public_bytes(cls, data):
        return cls(data)

    def public_bytes(self):
        return self.vk.to_bytes()

    def verify(self, signature, message):
        self.vk.verify(signature, message)

@@ -0,0 +1,110 @@

# MIT License
#
# Copyright (c) 2022 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import os

from RNS.Cryptography import HMAC
from RNS.Cryptography import PKCS7
from RNS.Cryptography.AES import AES_128_CBC

class Fernet():
    """
    This class provides a slightly modified implementation of the Fernet spec
    found at: https://github.com/fernet/spec/blob/master/Spec.md

    According to the spec, a Fernet token includes a one byte VERSION and
    eight byte TIMESTAMP field at the start of each token. These fields are
    not relevant to Reticulum. They are therefore stripped from this
    implementation, since they incur overhead and leak initiator metadata.
    """
    FERNET_OVERHEAD = 48 # Bytes

    @staticmethod
    def generate_key():
        return os.urandom(32)

    def __init__(self, key = None):
        if key == None:
            raise ValueError("Token key cannot be None")

        if len(key) != 32:
            raise ValueError("Token key must be 32 bytes, not "+str(len(key)))

        self._signing_key = key[:16]
        self._encryption_key = key[16:]


    def verify_hmac(self, token):
        if len(token) <= 32:
            raise ValueError("Cannot verify HMAC on token of only "+str(len(token))+" bytes")
        else:
            received_hmac = token[-32:]
            expected_hmac = HMAC.new(self._signing_key, token[:-32]).digest()

            if received_hmac == expected_hmac:
                return True
            else:
                return False


    def encrypt(self, data = None):
        iv = os.urandom(16)

        if not isinstance(data, bytes):
            raise TypeError("Token plaintext input must be bytes")

        ciphertext = AES_128_CBC.encrypt(
            plaintext = PKCS7.pad(data),
            key = self._encryption_key,
            iv = iv,
        )

        signed_parts = iv+ciphertext

        return signed_parts + HMAC.new(self._signing_key, signed_parts).digest()


    def decrypt(self, token = None):
        if not isinstance(token, bytes):
            raise TypeError("Token must be bytes")

        if not self.verify_hmac(token):
            raise ValueError("Token HMAC was invalid")

        iv = token[:16]
        ciphertext = token[16:-32]

        try:
            plaintext = PKCS7.unpad(
                AES_128_CBC.decrypt(
                    ciphertext,
                    self._encryption_key,
                    iv,
                )
            )

            return plaintext

        except Exception as e:
            raise ValueError("Could not decrypt token")
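The token layout produced by `encrypt` above is `iv (16 bytes) + ciphertext + HMAC-SHA256 (32 bytes)`, which is where `FERNET_OVERHEAD = 48` comes from. The MAC layer can be sketched with the standard library alone; this sketch replaces the AES-128-CBC output with an opaque placeholder, uses Python's built-in `hmac` module instead of RNS's vendored `HMAC`, and uses `compare_digest` rather than `==`:

```python
import hmac
import hashlib
import os

key = os.urandom(32)
signing_key, encryption_key = key[:16], key[16:]  # same split as above

iv = os.urandom(16)
ciphertext = os.urandom(32)  # placeholder for the AES-128-CBC output
signed_parts = iv + ciphertext
token = signed_parts + hmac.new(signing_key, signed_parts, hashlib.sha256).digest()

def verify_hmac(token: bytes) -> bool:
    # The last 32 bytes are the tag; everything before it is authenticated.
    received = token[-32:]
    expected = hmac.new(signing_key, token[:-32], hashlib.sha256).digest()
    return hmac.compare_digest(received, expected)

print(verify_hmac(token))                              # → True
print(verify_hmac(token[:-1] + bytes([token[-1] ^ 1])))  # flipped tag bit → False
```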

@@ -0,0 +1,54 @@

# MIT License
#
# Copyright (c) 2022 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import hashlib
from math import ceil
from RNS.Cryptography import HMAC

def hkdf(length=None, derive_from=None, salt=None, context=None):
    hash_len = 32

    def hmac_sha256(key, data):
        return HMAC.new(key, data).digest()

    if length == None or length < 1:
        raise ValueError("Invalid output key length")

    if derive_from == None or derive_from == "":
        raise ValueError("Cannot derive key from empty input material")

    if salt == None or len(salt) == 0:
        salt = bytes([0] * hash_len)

    if context == None:
        context = b""

    pseudorandom_key = hmac_sha256(salt, derive_from)

    block = b""
    derived = b""

    for i in range(ceil(length / hash_len)):
        block = hmac_sha256(pseudorandom_key, block + context + bytes([i + 1]))
        derived += block

    return derived[:length]
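The extract-and-expand loop above follows RFC 5869 (HKDF-SHA256), so a standard-library re-implementation of the same steps can be checked against the RFC's Test Case 1:

```python
import hmac
import hashlib
from math import ceil

def hkdf(length, derive_from, salt=None, context=None):
    hash_len = 32
    mac = lambda key, data: hmac.new(key, data, hashlib.sha256).digest()
    salt = salt or bytes(hash_len)  # all-zero salt when omitted, as above
    context = context or b""
    prk = mac(salt, derive_from)    # extract step
    block, derived = b"", b""
    for i in range(ceil(length / hash_len)):  # expand step
        block = mac(prk, block + context + bytes([i + 1]))
        derived += block
    return derived[:length]

# RFC 5869, Appendix A, Test Case 1
okm = hkdf(42, bytes.fromhex("0b" * 22),
           salt=bytes.fromhex("000102030405060708090a0b0c"),
           context=bytes.fromhex("f0f1f2f3f4f5f6f7f8f9"))
print(okm.hex())
```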

@@ -0,0 +1,183 @@

# This HMAC implementation comes directly from the HMAC implementation
# included in Python 3.10.4, and is almost completely identical. It has
# been modified to be a pure Python implementation that is not dependent
# on the system having OpenSSL binaries installed.

import warnings as _warnings
import hashlib as _hashlib

trans_5C = bytes((x ^ 0x5C) for x in range(256))
trans_36 = bytes((x ^ 0x36) for x in range(256))

# The size of the digests returned by HMAC depends on the underlying
# hashing module used. Use digest_size from the instance of HMAC instead.
digest_size = None


class HMAC:
    """RFC 2104 HMAC class. Also complies with RFC 4231.

    This supports the API for Cryptographic Hash Functions (PEP 247).
    """
    blocksize = 64  # 512-bit HMAC; can be changed in subclasses.

    __slots__ = (
        "_hmac", "_inner", "_outer", "block_size", "digest_size"
    )

    def __init__(self, key, msg=None, digestmod=_hashlib.sha256):
        """Create a new HMAC object.

        key: bytes or buffer, key for the keyed hash object.
        msg: bytes or buffer, Initial input for the hash or None.
        digestmod: A hash name suitable for hashlib.new(). *OR*
                   A hashlib constructor returning a new hash object. *OR*
                   A module supporting PEP 247.

                   Required as of 3.8, despite its position after the optional
                   msg argument. Passing it as a keyword argument is
                   recommended, though not required for legacy API reasons.
        """
        if not isinstance(key, (bytes, bytearray)):
            raise TypeError("key: expected bytes or bytearray, but got %r" % type(key).__name__)

        if not digestmod:
            raise TypeError("Missing required parameter 'digestmod'.")

        self._hmac_init(key, msg, digestmod)

    def _hmac_init(self, key, msg, digestmod):
        if callable(digestmod):
            digest_cons = digestmod
        elif isinstance(digestmod, str):
            digest_cons = lambda d=b'': _hashlib.new(digestmod, d)
        else:
            digest_cons = lambda d=b'': digestmod.new(d)

        self._hmac = None
        self._outer = digest_cons()
        self._inner = digest_cons()
        self.digest_size = self._inner.digest_size

        if hasattr(self._inner, 'block_size'):
            blocksize = self._inner.block_size
            if blocksize < 16:
                _warnings.warn('block_size of %d seems too small; using our '
                               'default of %d.' % (blocksize, self.blocksize),
                               RuntimeWarning, 2)
                blocksize = self.blocksize
        else:
            _warnings.warn('No block_size attribute on given digest object; '
                           'Assuming %d.' % (self.blocksize),
                           RuntimeWarning, 2)
            blocksize = self.blocksize

        if len(key) > blocksize:
            key = digest_cons(key).digest()

        # self.blocksize is the default blocksize. self.block_size is
        # effective block size as well as the public API attribute.
        self.block_size = blocksize

        key = key.ljust(blocksize, b'\0')
        self._outer.update(key.translate(trans_5C))
        self._inner.update(key.translate(trans_36))
        if msg is not None:
            self.update(msg)

    @property
    def name(self):
        if self._hmac:
            return self._hmac.name
        else:
            return f"hmac-{self._inner.name}"

    def update(self, msg):
        """Feed data from msg into this hashing object."""
        inst = self._hmac or self._inner
        inst.update(msg)

    def copy(self):
        """Return a separate copy of this hashing object.

        An update to this copy won't affect the original object.
        """
        # Call __new__ directly to avoid the expensive __init__.
        other = self.__class__.__new__(self.__class__)
        other.digest_size = self.digest_size
        if self._hmac:
            other._hmac = self._hmac.copy()
            other._inner = other._outer = None
        else:
            other._hmac = None
            other._inner = self._inner.copy()
            other._outer = self._outer.copy()
        return other

    def _current(self):
        """Return a hash object for the current state.

        To be used only internally with digest() and hexdigest().
        """
        if self._hmac:
            return self._hmac
        else:
            h = self._outer.copy()
            h.update(self._inner.digest())
            return h

    def digest(self):
        """Return the hash value of this hashing object.

        This returns the hmac value as bytes. The object is
        not altered in any way by this function; you can continue
        updating the object after calling this function.
        """
        h = self._current()
        return h.digest()

    def hexdigest(self):
        """Like digest(), but returns a string of hexadecimal digits instead."""
        h = self._current()
        return h.hexdigest()


def new(key, msg=None, digestmod=_hashlib.sha256):
    """Create a new hashing object and return it.

    key: bytes or buffer, The starting key for the hash.
    msg: bytes or buffer, Initial input for the hash, or None.
    digestmod: A hash name suitable for hashlib.new(). *OR*
               A hashlib constructor returning a new hash object. *OR*
               A module supporting PEP 247.

               Required as of 3.8, despite its position after the optional
               msg argument. Passing it as a keyword argument is
               recommended, though not required for legacy API reasons.

    You can now feed arbitrary bytes into the object using its update()
    method, and can ask for the hash value at any time by calling its digest()
    or hexdigest() methods.
    """
    return HMAC(key, msg, digestmod)


def digest(key, msg, digest):
    """Fast inline implementation of HMAC.

    key: bytes or buffer, The key for the keyed hash object.
    msg: bytes or buffer, Input message.
    digest: A hash name suitable for hashlib.new() for best performance. *OR*
            A hashlib constructor returning a new hash object. *OR*
            A module supporting PEP 247.
    """
    if callable(digest):
        digest_cons = digest
    elif isinstance(digest, str):
        digest_cons = lambda d=b'': _hashlib.new(digest, d)
    else:
        digest_cons = lambda d=b'': digest.new(d)

    inner = digest_cons()
    outer = digest_cons()
    blocksize = getattr(inner, 'block_size', 64)
    if len(key) > blocksize:
        key = digest_cons(key).digest()

    key = key + b'\x00' * (blocksize - len(key))
    inner.update(key.translate(trans_36))
    outer.update(key.translate(trans_5C))
    inner.update(msg)
    outer.update(inner.digest())
    return outer.digest()
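The `trans_36`/`trans_5C` translate tables above implement the classic ipad/opad keying of RFC 2104. A condensed one-shot sketch of the same construction, cross-checked against Python's built-in `hmac` module:

```python
import hmac
import hashlib

trans_5C = bytes((x ^ 0x5C) for x in range(256))
trans_36 = bytes((x ^ 0x36) for x in range(256))

def hmac_sha256(key: bytes, msg: bytes, blocksize: int = 64) -> bytes:
    # Keys longer than one block are hashed down first, then zero-padded.
    if len(key) > blocksize:
        key = hashlib.sha256(key).digest()
    key = key.ljust(blocksize, b"\0")
    inner = hashlib.sha256(key.translate(trans_36) + msg)
    outer = hashlib.sha256(key.translate(trans_5C) + inner.digest())
    return outer.digest()

# Matches the standard library byte for byte
assert hmac_sha256(b"key", b"message") == hmac.new(b"key", b"message", hashlib.sha256).digest()
```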

@@ -0,0 +1,34 @@

import importlib.util
if importlib.util.find_spec('hashlib') != None:
    import hashlib
else:
    hashlib = None

if hasattr(hashlib, "sha512"):
    from hashlib import sha512 as ext_sha512
else:
    from .SHA512 import sha512 as ext_sha512

if hasattr(hashlib, "sha256"):
    from hashlib import sha256 as ext_sha256
else:
    from .SHA256 import sha256 as ext_sha256

"""
The SHA primitives are abstracted here to allow platform-
aware hardware acceleration in the future. Currently only
uses Python's internal SHA-256 implementation. All SHA-256
calls in RNS end up here.
"""

def sha256(data):
    digest = ext_sha256()
    digest.update(data)

    return digest.digest()

def sha512(data):
    digest = ext_sha512()
    digest.update(data)

    return digest.digest()
|
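The wrappers above are one-shot conveniences over incremental hash objects. A minimal sketch of their behavior, assuming hashlib is available (the fallback branch would use the pure-Python SHA256/SHA512 modules instead):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    # Mirrors the wrapper above: create, absorb, finalize in one call.
    digest = hashlib.sha256()
    digest.update(data)
    return digest.digest()

# One-shot and incremental use produce identical digests.
assert sha256(b"reticulum") == hashlib.sha256(b"reticulum").digest()
assert len(sha256(b"")) == 32  # SHA-256 output is always 32 bytes
```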
@@ -0,0 +1,40 @@
# MIT License
#
# Copyright (c) 2022 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

class PKCS7:
    BLOCKSIZE = 16

    @staticmethod
    def pad(data, bs=BLOCKSIZE):
        l = len(data)
        n = bs-l%bs
        v = bytes([n])
        return data+v*n

    @staticmethod
    def unpad(data, bs=BLOCKSIZE):
        l = len(data)
        n = data[-1]
        if n > bs:
            raise ValueError("Cannot unpad, invalid padding length of "+str(n)+" bytes")
        else:
            return data[:l-n]
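PKCS#7 always appends between 1 and `bs` bytes, each byte equal to the pad length, so an exact block multiple gains a full extra block and unpadding is always unambiguous. A quick round-trip check of the scheme above (standalone functions here mirror the static methods):

```python
def pad(data: bytes, bs: int = 16) -> bytes:
    n = bs - len(data) % bs      # 1..bs bytes are always added
    return data + bytes([n]) * n

def unpad(data: bytes, bs: int = 16) -> bytes:
    n = data[-1]                 # last byte encodes the pad length
    if n > bs:
        raise ValueError("Cannot unpad, invalid padding length of "+str(n)+" bytes")
    return data[:len(data)-n]

padded = pad(b"Just Text")
assert len(padded) % 16 == 0
assert padded.endswith(bytes([7]) * 7)       # 9 bytes of data -> 7 bytes of 0x07
assert unpad(padded) == b"Just Text"
assert len(pad(b"0123456789abcdef")) == 32   # exact multiple gains a full block
```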
@@ -0,0 +1,38 @@
import importlib.util

PROVIDER_NONE = 0x00
PROVIDER_INTERNAL = 0x01
PROVIDER_PYCA = 0x02

PROVIDER = PROVIDER_NONE

pyca_v = None
use_pyca = False

try:
    if importlib.util.find_spec('cryptography') is not None:
        import cryptography
        pyca_v = cryptography.__version__
        v = pyca_v.split(".")

        if int(v[0]) == 2:
            if int(v[1]) >= 8:
                use_pyca = True
        elif int(v[0]) >= 3:
            use_pyca = True

except Exception as e:
    pass

if use_pyca:
    PROVIDER = PROVIDER_PYCA
else:
    PROVIDER = PROVIDER_INTERNAL

def backend():
    if PROVIDER == PROVIDER_NONE:
        return "none"
    elif PROVIDER == PROVIDER_INTERNAL:
        return "internal"
    elif PROVIDER == PROVIDER_PYCA:
        return "openssl, PyCA "+str(pyca_v)
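The version gate above accepts pyca/cryptography 2.8 or newer in the 2.x series, and anything from the 3.x series onward. The same decision, isolated into a helper for clarity (the function name `pyca_usable` and the version strings are illustrative, not from the source):

```python
def pyca_usable(version: str) -> bool:
    # Accept 2.8+ in the 2.x series, and any major version >= 3.
    v = version.split(".")
    if int(v[0]) == 2:
        return int(v[1]) >= 8
    return int(v[0]) >= 3

assert pyca_usable("2.8") and pyca_usable("3.4") and pyca_usable("41.0.7")
assert not pyca_usable("2.7") and not pyca_usable("1.9")
```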
@@ -0,0 +1,90 @@
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey

# These proxy classes exist to create a uniform API across
# cryptography primitive providers.

class X25519PrivateKeyProxy:
    def __init__(self, real):
        self.real = real

    @classmethod
    def generate(cls):
        return cls(X25519PrivateKey.generate())

    @classmethod
    def from_private_bytes(cls, data):
        return cls(X25519PrivateKey.from_private_bytes(data))

    def private_bytes(self):
        return self.real.private_bytes(
            encoding=serialization.Encoding.Raw,
            format=serialization.PrivateFormat.Raw,
            encryption_algorithm=serialization.NoEncryption(),
        )

    def public_key(self):
        return X25519PublicKeyProxy(self.real.public_key())

    def exchange(self, peer_public_key):
        return self.real.exchange(peer_public_key.real)


class X25519PublicKeyProxy:
    def __init__(self, real):
        self.real = real

    @classmethod
    def from_public_bytes(cls, data):
        return cls(X25519PublicKey.from_public_bytes(data))

    def public_bytes(self):
        return self.real.public_bytes(
            encoding=serialization.Encoding.Raw,
            format=serialization.PublicFormat.Raw
        )


class Ed25519PrivateKeyProxy:
    def __init__(self, real):
        self.real = real

    @classmethod
    def generate(cls):
        return cls(Ed25519PrivateKey.generate())

    @classmethod
    def from_private_bytes(cls, data):
        return cls(Ed25519PrivateKey.from_private_bytes(data))

    def private_bytes(self):
        return self.real.private_bytes(
            encoding=serialization.Encoding.Raw,
            format=serialization.PrivateFormat.Raw,
            encryption_algorithm=serialization.NoEncryption()
        )

    def public_key(self):
        return Ed25519PublicKeyProxy(self.real.public_key())

    def sign(self, message):
        return self.real.sign(message)


class Ed25519PublicKeyProxy:
    def __init__(self, real):
        self.real = real

    @classmethod
    def from_public_bytes(cls, data):
        return cls(Ed25519PublicKey.from_public_bytes(data))

    def public_bytes(self):
        return self.real.public_bytes(
            encoding=serialization.Encoding.Raw,
            format=serialization.PublicFormat.Raw
        )

    def verify(self, signature, message):
        self.real.verify(signature, message)
@@ -0,0 +1,129 @@
# MIT License
#
# Copyright (c) 2017 Thomas Dixon
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import copy
import struct
import sys


def new(m=None):
    return sha256(m)

class sha256(object):
    _k = (0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
          0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
          0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
          0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
          0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc,
          0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
          0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7,
          0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,
          0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13,
          0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
          0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3,
          0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
          0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5,
          0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
          0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208,
          0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2)
    _h = (0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
          0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19)
    _output_size = 8

    blocksize = 1
    block_size = 64
    digest_size = 32

    def __init__(self, m=None):
        self._buffer = b""
        self._counter = 0

        if m is not None:
            if type(m) is not bytes:
                raise TypeError('%s() argument 1 must be bytes, not %s' % (self.__class__.__name__, type(m).__name__))
            self.update(m)

    def _rotr(self, x, y):
        return ((x >> y) | (x << (32-y))) & 0xFFFFFFFF

    def _sha256_process(self, c):
        w = [0]*64
        w[0:16] = struct.unpack('!16L', c)

        for i in range(16, 64):
            s0 = self._rotr(w[i-15], 7) ^ self._rotr(w[i-15], 18) ^ (w[i-15] >> 3)
            s1 = self._rotr(w[i-2], 17) ^ self._rotr(w[i-2], 19) ^ (w[i-2] >> 10)
            w[i] = (w[i-16] + s0 + w[i-7] + s1) & 0xFFFFFFFF

        a,b,c,d,e,f,g,h = self._h

        for i in range(64):
            s0 = self._rotr(a, 2) ^ self._rotr(a, 13) ^ self._rotr(a, 22)
            maj = (a & b) ^ (a & c) ^ (b & c)
            t2 = s0 + maj
            s1 = self._rotr(e, 6) ^ self._rotr(e, 11) ^ self._rotr(e, 25)
            ch = (e & f) ^ ((~e) & g)
            t1 = h + s1 + ch + self._k[i] + w[i]

            h = g
            g = f
            f = e
            e = (d + t1) & 0xFFFFFFFF
            d = c
            c = b
            b = a
            a = (t1 + t2) & 0xFFFFFFFF

        self._h = [(x+y) & 0xFFFFFFFF for x,y in zip(self._h, [a,b,c,d,e,f,g,h])]

    def update(self, m):
        if not m:
            return

        if type(m) is not bytes:
            raise TypeError('%s() argument 1 must be bytes, not %s' % (sys._getframe().f_code.co_name, type(m).__name__))

        self._buffer += m
        self._counter += len(m)

        while len(self._buffer) >= 64:
            self._sha256_process(self._buffer[:64])
            self._buffer = self._buffer[64:]

    def digest(self):
        mdi = self._counter & 0x3F
        length = struct.pack('!Q', self._counter<<3)

        if mdi < 56:
            padlen = 55-mdi
        else:
            padlen = 119-mdi

        r = self.copy()
        r.update(b'\x80'+(b'\x00'*padlen)+length)
        return b''.join([struct.pack('!L', i) for i in r._h[:self._output_size]])

    def hexdigest(self):
        return self.digest().hex()

    def copy(self):
        return copy.deepcopy(self)
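The digest() method applies Merkle–Damgård padding: a 0x80 byte, enough zeros, and the 64-bit big-endian bit length, so the padded message is always a multiple of 64 bytes. The padlen arithmetic can be checked in isolation (the helper name `sha256_padding` is illustrative):

```python
import struct

def sha256_padding(counter: int) -> bytes:
    # Same arithmetic as digest() above; counter is the number of
    # message bytes absorbed so far.
    mdi = counter & 0x3F
    length = struct.pack('!Q', counter << 3)   # message length in bits
    padlen = 55 - mdi if mdi < 56 else 119 - mdi
    return b'\x80' + b'\x00' * padlen + length

for n in (0, 1, 55, 56, 63, 64, 1000):
    # The padded total is always a whole number of 64-byte blocks.
    assert (n + len(sha256_padding(n))) % 64 == 0
assert len(sha256_padding(0)) == 64  # an empty message pads to one full block
```

The `mdi < 56` branch decides whether the 8-byte length field still fits in the current block or forces a second padding block.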
@@ -0,0 +1,129 @@
# MIT License
#
# Copyright (c) 2017 Thomas Dixon
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import copy, struct, sys

def new(m=None):
    return sha512(m)

class sha512(object):
    _k = (0x428a2f98d728ae22, 0x7137449123ef65cd, 0xb5c0fbcfec4d3b2f, 0xe9b5dba58189dbbc,
          0x3956c25bf348b538, 0x59f111f1b605d019, 0x923f82a4af194f9b, 0xab1c5ed5da6d8118,
          0xd807aa98a3030242, 0x12835b0145706fbe, 0x243185be4ee4b28c, 0x550c7dc3d5ffb4e2,
          0x72be5d74f27b896f, 0x80deb1fe3b1696b1, 0x9bdc06a725c71235, 0xc19bf174cf692694,
          0xe49b69c19ef14ad2, 0xefbe4786384f25e3, 0x0fc19dc68b8cd5b5, 0x240ca1cc77ac9c65,
          0x2de92c6f592b0275, 0x4a7484aa6ea6e483, 0x5cb0a9dcbd41fbd4, 0x76f988da831153b5,
          0x983e5152ee66dfab, 0xa831c66d2db43210, 0xb00327c898fb213f, 0xbf597fc7beef0ee4,
          0xc6e00bf33da88fc2, 0xd5a79147930aa725, 0x06ca6351e003826f, 0x142929670a0e6e70,
          0x27b70a8546d22ffc, 0x2e1b21385c26c926, 0x4d2c6dfc5ac42aed, 0x53380d139d95b3df,
          0x650a73548baf63de, 0x766a0abb3c77b2a8, 0x81c2c92e47edaee6, 0x92722c851482353b,
          0xa2bfe8a14cf10364, 0xa81a664bbc423001, 0xc24b8b70d0f89791, 0xc76c51a30654be30,
          0xd192e819d6ef5218, 0xd69906245565a910, 0xf40e35855771202a, 0x106aa07032bbd1b8,
          0x19a4c116b8d2d0c8, 0x1e376c085141ab53, 0x2748774cdf8eeb99, 0x34b0bcb5e19b48a8,
          0x391c0cb3c5c95a63, 0x4ed8aa4ae3418acb, 0x5b9cca4f7763e373, 0x682e6ff3d6b2b8a3,
          0x748f82ee5defb2fc, 0x78a5636f43172f60, 0x84c87814a1f0ab72, 0x8cc702081a6439ec,
          0x90befffa23631e28, 0xa4506cebde82bde9, 0xbef9a3f7b2c67915, 0xc67178f2e372532b,
          0xca273eceea26619c, 0xd186b8c721c0c207, 0xeada7dd6cde0eb1e, 0xf57d4f7fee6ed178,
          0x06f067aa72176fba, 0x0a637dc5a2c898a6, 0x113f9804bef90dae, 0x1b710b35131c471b,
          0x28db77f523047d84, 0x32caab7b40c72493, 0x3c9ebe0a15c9bebc, 0x431d67c49c100d4c,
          0x4cc5d4becb3e42b6, 0x597f299cfc657e2a, 0x5fcb6fab3ad6faec, 0x6c44198c4a475817)
    _h = (0x6a09e667f3bcc908, 0xbb67ae8584caa73b, 0x3c6ef372fe94f82b, 0xa54ff53a5f1d36f1,
          0x510e527fade682d1, 0x9b05688c2b3e6c1f, 0x1f83d9abfb41bd6b, 0x5be0cd19137e2179)
    _output_size = 8

    blocksize = 1
    block_size = 128
    digest_size = 64

    def __init__(self, m=None):
        self._buffer = b''
        self._counter = 0

        if m is not None:
            if type(m) is not bytes:
                raise TypeError('%s() argument 1 must be bytes, not %s' % (self.__class__.__name__, type(m).__name__))
            self.update(m)

    def _rotr(self, x, y):
        return ((x >> y) | (x << (64-y))) & 0xFFFFFFFFFFFFFFFF

    def _sha512_process(self, chunk):
        w = [0]*80
        w[0:16] = struct.unpack('!16Q', chunk)

        for i in range(16, 80):
            s0 = self._rotr(w[i-15], 1) ^ self._rotr(w[i-15], 8) ^ (w[i-15] >> 7)
            s1 = self._rotr(w[i-2], 19) ^ self._rotr(w[i-2], 61) ^ (w[i-2] >> 6)
            w[i] = (w[i-16] + s0 + w[i-7] + s1) & 0xFFFFFFFFFFFFFFFF

        a,b,c,d,e,f,g,h = self._h

        for i in range(80):
            s0 = self._rotr(a, 28) ^ self._rotr(a, 34) ^ self._rotr(a, 39)
            maj = (a & b) ^ (a & c) ^ (b & c)
            t2 = s0 + maj
            s1 = self._rotr(e, 14) ^ self._rotr(e, 18) ^ self._rotr(e, 41)
            ch = (e & f) ^ ((~e) & g)
            t1 = h + s1 + ch + self._k[i] + w[i]

            h = g
            g = f
            f = e
            e = (d + t1) & 0xFFFFFFFFFFFFFFFF
            d = c
            c = b
            b = a
            a = (t1 + t2) & 0xFFFFFFFFFFFFFFFF

        self._h = [(x+y) & 0xFFFFFFFFFFFFFFFF for x,y in zip(self._h, [a,b,c,d,e,f,g,h])]

    def update(self, m):
        if not m:
            return
        if type(m) is not bytes:
            raise TypeError('%s() argument 1 must be bytes, not %s' % (sys._getframe().f_code.co_name, type(m).__name__))

        self._buffer += m
        self._counter += len(m)

        while len(self._buffer) >= 128:
            self._sha512_process(self._buffer[:128])
            self._buffer = self._buffer[128:]

    def digest(self):
        mdi = self._counter & 0x7F
        length = struct.pack('!Q', self._counter<<3)

        if mdi < 112:
            padlen = 111-mdi
        else:
            padlen = 239-mdi

        r = self.copy()
        r.update(b'\x80'+(b'\x00'*(padlen+8))+length)
        return b''.join([struct.pack('!Q', i) for i in r._h[:self._output_size]])

    def hexdigest(self):
        return self.digest().hex()

    def copy(self):
        return copy.deepcopy(self)
@@ -0,0 +1,171 @@
# By Nicko van Someren, 2021. This code is released into the public domain.
# Small modifications for use in Reticulum, and constant time key exchange
# added by Mark Qvist in 2022.

# WARNING! Only the X25519PrivateKey.exchange() method attempts to hide execution time.
# In the context of Reticulum, this is sufficient, but it may not be in other systems. If
# this code is to be used to provide cryptographic security in an environment where the
# start and end times of the execution can be guessed, inferred or measured then it is
# critical that steps are taken to hide the execution time, for instance by adding a
# delay so that encrypted packets are not sent until a fixed time after the _start_ of
# execution.


import os
import time

P = 2 ** 255 - 19
_A = 486662


def _point_add(point_n, point_m, point_diff):
    """Given the projection of two points and their difference, return their sum"""
    (xn, zn) = point_n
    (xm, zm) = point_m
    (x_diff, z_diff) = point_diff
    x = (z_diff << 2) * (xm * xn - zm * zn) ** 2
    z = (x_diff << 2) * (xm * zn - zm * xn) ** 2
    return x % P, z % P


def _point_double(point_n):
    """Double a point provided in projective coordinates"""
    (xn, zn) = point_n
    xn2 = xn ** 2
    zn2 = zn ** 2
    x = (xn2 - zn2) ** 2
    xzn = xn * zn
    z = 4 * xzn * (xn2 + _A * xzn + zn2)
    return x % P, z % P


def _const_time_swap(a, b, swap):
    """Swap two values in constant time"""
    index = int(swap) * 2
    temp = (a, b, b, a)
    return temp[index:index+2]


def _raw_curve25519(base, n):
    """Raise the point base to the power n"""
    zero = (1, 0)
    one = (base, 1)
    mP, m1P = zero, one

    for i in reversed(range(256)):
        bit = bool(n & (1 << i))
        mP, m1P = _const_time_swap(mP, m1P, bit)
        mP, m1P = _point_double(mP), _point_add(mP, m1P, one)
        mP, m1P = _const_time_swap(mP, m1P, bit)

    x, z = mP
    inv_z = pow(z, P - 2, P)
    return (x * inv_z) % P


def _unpack_number(s):
    """Unpack 32 bytes to a 256 bit value"""
    if len(s) != 32:
        raise ValueError('Curve25519 values must be 32 bytes')
    return int.from_bytes(s, "little")


def _pack_number(n):
    """Pack a value into 32 bytes"""
    return n.to_bytes(32, "little")


def _fix_secret(n):
    """Mask a value to be an acceptable exponent"""
    n &= ~7
    n &= ~(128 << 8 * 31)
    n |= 64 << 8 * 31
    return n


def curve25519(base_point_raw, secret_raw):
    """Raise the base point to a given power"""
    base_point = _unpack_number(base_point_raw)
    secret = _fix_secret(_unpack_number(secret_raw))
    return _pack_number(_raw_curve25519(base_point, secret))


def curve25519_base(secret_raw):
    """Raise the generator point to a given power"""
    secret = _fix_secret(_unpack_number(secret_raw))
    return _pack_number(_raw_curve25519(9, secret))


class X25519PublicKey:
    def __init__(self, x):
        self.x = x

    @classmethod
    def from_public_bytes(cls, data):
        return cls(_unpack_number(data))

    def public_bytes(self):
        return _pack_number(self.x)


class X25519PrivateKey:
    MIN_EXEC_TIME = 0.002
    MAX_EXEC_TIME = 0.5
    DELAY_WINDOW = 10

    T_CLEAR = None
    T_MAX = 0

    def __init__(self, a):
        self.a = a

    @classmethod
    def generate(cls):
        return cls.from_private_bytes(os.urandom(32))

    @classmethod
    def from_private_bytes(cls, data):
        return cls(_fix_secret(_unpack_number(data)))

    def private_bytes(self):
        return _pack_number(self.a)

    def public_key(self):
        return X25519PublicKey.from_public_bytes(_pack_number(_raw_curve25519(9, self.a)))

    def exchange(self, peer_public_key):
        if isinstance(peer_public_key, bytes):
            peer_public_key = X25519PublicKey.from_public_bytes(peer_public_key)

        start = time.time()

        shared = _pack_number(_raw_curve25519(peer_public_key.x, self.a))

        end = time.time()
        duration = end-start

        if X25519PrivateKey.T_CLEAR is None:
            X25519PrivateKey.T_CLEAR = end + X25519PrivateKey.DELAY_WINDOW

        if end > X25519PrivateKey.T_CLEAR:
            X25519PrivateKey.T_CLEAR = end + X25519PrivateKey.DELAY_WINDOW
            X25519PrivateKey.T_MAX = 0

        if duration < X25519PrivateKey.T_MAX or duration < X25519PrivateKey.MIN_EXEC_TIME:
            target = start+X25519PrivateKey.T_MAX

            if target > start+X25519PrivateKey.MAX_EXEC_TIME:
                target = start+X25519PrivateKey.MAX_EXEC_TIME

            if target < start+X25519PrivateKey.MIN_EXEC_TIME:
                target = start+X25519PrivateKey.MIN_EXEC_TIME

            try:
                time.sleep(target-time.time())
            except Exception as e:
                pass

        elif duration > X25519PrivateKey.T_MAX:
            X25519PrivateKey.T_MAX = duration

        return shared
@@ -0,0 +1,24 @@
import os
import glob

from .Hashes import sha256
from .Hashes import sha512
from .HKDF import hkdf
from .PKCS7 import PKCS7
from .Fernet import Fernet
from .Provider import backend

import RNS.Cryptography.Provider as cp

if cp.PROVIDER == cp.PROVIDER_INTERNAL:
    from RNS.Cryptography.X25519 import X25519PrivateKey, X25519PublicKey
    from RNS.Cryptography.Ed25519 import Ed25519PrivateKey, Ed25519PublicKey

elif cp.PROVIDER == cp.PROVIDER_PYCA:
    from RNS.Cryptography.Proxies import X25519PrivateKeyProxy as X25519PrivateKey
    from RNS.Cryptography.Proxies import X25519PublicKeyProxy as X25519PublicKey
    from RNS.Cryptography.Proxies import Ed25519PrivateKeyProxy as Ed25519PrivateKey
    from RNS.Cryptography.Proxies import Ed25519PublicKeyProxy as Ed25519PublicKey

modules = glob.glob(os.path.dirname(__file__)+"/*.py")
__all__ = [ os.path.basename(f)[:-3] for f in modules if not f.endswith('__init__.py')]
@@ -0,0 +1 @@
from .aes import AES
@@ -0,0 +1,271 @@
# MIT License

# Copyright (c) 2021 Or Gur Arie

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from .utils import *


class AES:
    # AES-128 block size
    block_size = 16
    # AES-128 encrypts messages with 10 rounds
    _rounds = 10


    # initialize the AES object
    def __init__(self, key):
        """
        Initializes the object with a given key.
        """
        # make sure key length is right
        assert len(key) == AES.block_size

        # ExpandKey
        self._round_keys = self._expand_key(key)


    # will perform the AES ExpandKey phase
    def _expand_key(self, master_key):
        """
        Expands and returns a list of key matrices for the given master_key.
        """

        # Initialize round keys with raw key material.
        key_columns = bytes2matrix(master_key)
        iteration_size = len(master_key) // 4

        # Each iteration has exactly as many columns as the key material.
        i = 1
        while len(key_columns) < (self._rounds + 1) * 4:
            # Copy previous word.
            word = list(key_columns[-1])

            # Perform schedule_core once every "row".
            if len(key_columns) % iteration_size == 0:
                # Circular shift.
                word.append(word.pop(0))
                # Map to S-BOX.
                word = [s_box[b] for b in word]
                # XOR with first byte of R-CON, since the other bytes of R-CON are 0.
                word[0] ^= r_con[i]
                i += 1
            elif len(master_key) == 32 and len(key_columns) % iteration_size == 4:
                # Run word through S-box in the fourth iteration when using a
                # 256-bit key.
                word = [s_box[b] for b in word]

            # XOR with equivalent word from previous iteration.
            word = bytes(i^j for i, j in zip(word, key_columns[-iteration_size]))
            key_columns.append(word)

        # Group key words in 4x4 byte matrices.
        return [key_columns[4*i : 4*(i+1)] for i in range(len(key_columns) // 4)]


    # encrypt a single block of data with AES
    def _encrypt_block(self, plaintext):
        """
        Encrypts a single block of 16 byte long plaintext.
        """
        # length of a single block
        assert len(plaintext) == AES.block_size

        # perform on a matrix
        state = bytes2matrix(plaintext)

        # AddRoundKey
        add_round_key(state, self._round_keys[0])

        # 9 main rounds
        for i in range(1, self._rounds):
            # SubBytes
            sub_bytes(state)
            # ShiftRows
            shift_rows(state)
            # MixColumns
            mix_columns(state)
            # AddRoundKey
            add_round_key(state, self._round_keys[i])

        # last round, without the MixColumns step
        sub_bytes(state)
        shift_rows(state)
        add_round_key(state, self._round_keys[-1])

        # return the encrypted matrix as bytes
        return matrix2bytes(state)


    # decrypt a single block of data with AES
    def _decrypt_block(self, ciphertext):
        """
        Decrypts a single block of 16 byte long ciphertext.
        """
        # length of a single block
        assert len(ciphertext) == AES.block_size

        # perform on a matrix
        state = bytes2matrix(ciphertext)

        # in reverse order, last round is first
        add_round_key(state, self._round_keys[-1])
        inv_shift_rows(state)
        inv_sub_bytes(state)

        for i in range(self._rounds - 1, 0, -1):
            # main rounds
            add_round_key(state, self._round_keys[i])
            inv_mix_columns(state)
            inv_shift_rows(state)
            inv_sub_bytes(state)

        # initial AddRoundKey phase
        add_round_key(state, self._round_keys[0])

        # return bytes
        return matrix2bytes(state)


    # will encrypt the entire data
    def encrypt(self, plaintext, iv):
        """
        Encrypts `plaintext` in CBC mode with the given initialization
        vector (iv). The plaintext must already be padded to a multiple
        of the block size; PKCS#7 padding is applied by the caller.
        """
        # iv length must be same as block size
        assert len(iv) == AES.block_size

        assert len(plaintext) % AES.block_size == 0

        ciphertext_blocks = []

        previous = iv
        for plaintext_block in split_blocks(plaintext):
            # in CBC mode every block is XOR'd with the previous block
            xorred = xor_bytes(plaintext_block, previous)

            # encrypt current block
            block = self._encrypt_block(xorred)
            previous = block

            # append to ciphertext
            ciphertext_blocks.append(block)

        # return as bytes
        return b''.join(ciphertext_blocks)


    # will decrypt the entire data
    def decrypt(self, ciphertext, iv):
        """
        Decrypts `ciphertext` in CBC mode with the given initialization
        vector (iv). Any PKCS#7 padding is stripped by the caller.
        """
        # iv length must be same as block size
        assert len(iv) == AES.block_size

        plaintext_blocks = []

        previous = iv
        for ciphertext_block in split_blocks(ciphertext):
            # in CBC mode every block is XOR'd with the previous block
            xorred = xor_bytes(previous, self._decrypt_block(ciphertext_block))

            # append plaintext
            plaintext_blocks.append(xorred)
            previous = ciphertext_block

        return b''.join(plaintext_blocks)


def test():
    # modules and classes required for test only
    import os
    class bcolors:
        OK = '\033[92m'      # GREEN
        WARNING = '\033[93m' # YELLOW
        FAIL = '\033[91m'    # RED
        RESET = '\033[0m'    # RESET COLOR

    # PKCS#7 helpers for the tests, since encrypt() and decrypt()
    # expect block-size multiples
    def pkcs7_pad(data):
        n = AES.block_size - len(data) % AES.block_size
        return data + bytes([n]) * n
    def pkcs7_unpad(data):
        return data[:len(data)-data[-1]]

    # will test AES class by performing an encryption / decryption
    print("AES Tests")
    print("=========")

    # generate a secret key and print details
    key = os.urandom(AES.block_size)
    _aes = AES(key)
    print(f"Algorithm: AES-CBC-{AES.block_size*8}")
    print(f"Secret Key: {key.hex()}")
    print()

    # test single block encryption / decryption
    iv = os.urandom(AES.block_size)

    single_block_text = b"SingleBlock Text"
    print("Single Block Tests")
    print("------------------")
    print(f"iv: {iv.hex()}")

    print(f"plain text: '{single_block_text.decode()}'")
    ciphertext_block = _aes._encrypt_block(single_block_text)
    plaintext_block = _aes._decrypt_block(ciphertext_block)
    print(f"Ciphertext Hex: {ciphertext_block.hex()}")
    print(f"Plaintext: {plaintext_block.decode()}")
    assert plaintext_block == single_block_text
    print(bcolors.OK + "Single Block Test Passed Successfully" + bcolors.RESET)
    print()

    # test a less than a block length phrase, padded to a full block
    iv = os.urandom(AES.block_size)

    short_text = b"Just Text"
    print("Short Text Tests")
    print("----------------")
    print(f"iv: {iv.hex()}")
    print(f"plain text: '{short_text.decode()}'")
    ciphertext_short = _aes.encrypt(pkcs7_pad(short_text), iv)
    plaintext_short = pkcs7_unpad(_aes.decrypt(ciphertext_short, iv))
    print(f"Ciphertext Hex: {ciphertext_short.hex()}")
    print(f"Plaintext: {plaintext_short.decode()}")
    assert short_text == plaintext_short
    print(bcolors.OK + "Short Text Test Passed Successfully" + bcolors.RESET)
    print()

    # test an arbitrary length phrase
    iv = os.urandom(AES.block_size)

    text = b"This Text is longer than one block"
    print("Arbitrary Length Tests")
    print("----------------------")
    print(f"iv: {iv.hex()}")
    print(f"plain text: '{text.decode()}'")
    ciphertext = _aes.encrypt(pkcs7_pad(text), iv)
    plaintext = pkcs7_unpad(_aes.decrypt(ciphertext, iv))
    print(f"Ciphertext Hex: {ciphertext.hex()}")
    print(f"Plaintext: {plaintext.decode()}")
    assert text == plaintext
    print(bcolors.OK + "Arbitrary Length Text Test Passed Successfully" + bcolors.RESET)
    print()


if __name__ == "__main__":
    # test AES class
    test()
@ -0,0 +1,159 @@
# MIT License

# Copyright (c) 2021 Or Gur Arie

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

'''
Utility functions for AES encryption / decryption
'''

## AES lookup tables
# resource: https://en.wikipedia.org/wiki/Rijndael_S-box
s_box = (
    0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5, 0x30, 0x01, 0x67, 0x2B, 0xFE, 0xD7, 0xAB, 0x76,
    0xCA, 0x82, 0xC9, 0x7D, 0xFA, 0x59, 0x47, 0xF0, 0xAD, 0xD4, 0xA2, 0xAF, 0x9C, 0xA4, 0x72, 0xC0,
    0xB7, 0xFD, 0x93, 0x26, 0x36, 0x3F, 0xF7, 0xCC, 0x34, 0xA5, 0xE5, 0xF1, 0x71, 0xD8, 0x31, 0x15,
    0x04, 0xC7, 0x23, 0xC3, 0x18, 0x96, 0x05, 0x9A, 0x07, 0x12, 0x80, 0xE2, 0xEB, 0x27, 0xB2, 0x75,
    0x09, 0x83, 0x2C, 0x1A, 0x1B, 0x6E, 0x5A, 0xA0, 0x52, 0x3B, 0xD6, 0xB3, 0x29, 0xE3, 0x2F, 0x84,
    0x53, 0xD1, 0x00, 0xED, 0x20, 0xFC, 0xB1, 0x5B, 0x6A, 0xCB, 0xBE, 0x39, 0x4A, 0x4C, 0x58, 0xCF,
    0xD0, 0xEF, 0xAA, 0xFB, 0x43, 0x4D, 0x33, 0x85, 0x45, 0xF9, 0x02, 0x7F, 0x50, 0x3C, 0x9F, 0xA8,
    0x51, 0xA3, 0x40, 0x8F, 0x92, 0x9D, 0x38, 0xF5, 0xBC, 0xB6, 0xDA, 0x21, 0x10, 0xFF, 0xF3, 0xD2,
    0xCD, 0x0C, 0x13, 0xEC, 0x5F, 0x97, 0x44, 0x17, 0xC4, 0xA7, 0x7E, 0x3D, 0x64, 0x5D, 0x19, 0x73,
    0x60, 0x81, 0x4F, 0xDC, 0x22, 0x2A, 0x90, 0x88, 0x46, 0xEE, 0xB8, 0x14, 0xDE, 0x5E, 0x0B, 0xDB,
    0xE0, 0x32, 0x3A, 0x0A, 0x49, 0x06, 0x24, 0x5C, 0xC2, 0xD3, 0xAC, 0x62, 0x91, 0x95, 0xE4, 0x79,
    0xE7, 0xC8, 0x37, 0x6D, 0x8D, 0xD5, 0x4E, 0xA9, 0x6C, 0x56, 0xF4, 0xEA, 0x65, 0x7A, 0xAE, 0x08,
    0xBA, 0x78, 0x25, 0x2E, 0x1C, 0xA6, 0xB4, 0xC6, 0xE8, 0xDD, 0x74, 0x1F, 0x4B, 0xBD, 0x8B, 0x8A,
    0x70, 0x3E, 0xB5, 0x66, 0x48, 0x03, 0xF6, 0x0E, 0x61, 0x35, 0x57, 0xB9, 0x86, 0xC1, 0x1D, 0x9E,
    0xE1, 0xF8, 0x98, 0x11, 0x69, 0xD9, 0x8E, 0x94, 0x9B, 0x1E, 0x87, 0xE9, 0xCE, 0x55, 0x28, 0xDF,
    0x8C, 0xA1, 0x89, 0x0D, 0xBF, 0xE6, 0x42, 0x68, 0x41, 0x99, 0x2D, 0x0F, 0xB0, 0x54, 0xBB, 0x16,
)

inv_s_box = (
    0x52, 0x09, 0x6A, 0xD5, 0x30, 0x36, 0xA5, 0x38, 0xBF, 0x40, 0xA3, 0x9E, 0x81, 0xF3, 0xD7, 0xFB,
    0x7C, 0xE3, 0x39, 0x82, 0x9B, 0x2F, 0xFF, 0x87, 0x34, 0x8E, 0x43, 0x44, 0xC4, 0xDE, 0xE9, 0xCB,
    0x54, 0x7B, 0x94, 0x32, 0xA6, 0xC2, 0x23, 0x3D, 0xEE, 0x4C, 0x95, 0x0B, 0x42, 0xFA, 0xC3, 0x4E,
    0x08, 0x2E, 0xA1, 0x66, 0x28, 0xD9, 0x24, 0xB2, 0x76, 0x5B, 0xA2, 0x49, 0x6D, 0x8B, 0xD1, 0x25,
    0x72, 0xF8, 0xF6, 0x64, 0x86, 0x68, 0x98, 0x16, 0xD4, 0xA4, 0x5C, 0xCC, 0x5D, 0x65, 0xB6, 0x92,
    0x6C, 0x70, 0x48, 0x50, 0xFD, 0xED, 0xB9, 0xDA, 0x5E, 0x15, 0x46, 0x57, 0xA7, 0x8D, 0x9D, 0x84,
    0x90, 0xD8, 0xAB, 0x00, 0x8C, 0xBC, 0xD3, 0x0A, 0xF7, 0xE4, 0x58, 0x05, 0xB8, 0xB3, 0x45, 0x06,
    0xD0, 0x2C, 0x1E, 0x8F, 0xCA, 0x3F, 0x0F, 0x02, 0xC1, 0xAF, 0xBD, 0x03, 0x01, 0x13, 0x8A, 0x6B,
    0x3A, 0x91, 0x11, 0x41, 0x4F, 0x67, 0xDC, 0xEA, 0x97, 0xF2, 0xCF, 0xCE, 0xF0, 0xB4, 0xE6, 0x73,
    0x96, 0xAC, 0x74, 0x22, 0xE7, 0xAD, 0x35, 0x85, 0xE2, 0xF9, 0x37, 0xE8, 0x1C, 0x75, 0xDF, 0x6E,
    0x47, 0xF1, 0x1A, 0x71, 0x1D, 0x29, 0xC5, 0x89, 0x6F, 0xB7, 0x62, 0x0E, 0xAA, 0x18, 0xBE, 0x1B,
    0xFC, 0x56, 0x3E, 0x4B, 0xC6, 0xD2, 0x79, 0x20, 0x9A, 0xDB, 0xC0, 0xFE, 0x78, 0xCD, 0x5A, 0xF4,
    0x1F, 0xDD, 0xA8, 0x33, 0x88, 0x07, 0xC7, 0x31, 0xB1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xEC, 0x5F,
    0x60, 0x51, 0x7F, 0xA9, 0x19, 0xB5, 0x4A, 0x0D, 0x2D, 0xE5, 0x7A, 0x9F, 0x93, 0xC9, 0x9C, 0xEF,
    0xA0, 0xE0, 0x3B, 0x4D, 0xAE, 0x2A, 0xF5, 0xB0, 0xC8, 0xEB, 0xBB, 0x3C, 0x83, 0x53, 0x99, 0x61,
    0x17, 0x2B, 0x04, 0x7E, 0xBA, 0x77, 0xD6, 0x26, 0xE1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0C, 0x7D,
)


## AES AddRoundKey
# Round constants https://en.wikipedia.org/wiki/AES_key_schedule#Round_constants
r_con = (
    0x00, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40,
    0x80, 0x1B, 0x36, 0x6C, 0xD8, 0xAB, 0x4D, 0x9A,
    0x2F, 0x5E, 0xBC, 0x63, 0xC6, 0x97, 0x35, 0x6A,
    0xD4, 0xB3, 0x7D, 0xFA, 0xEF, 0xC5, 0x91, 0x39,
)

def add_round_key(s, k):
    for i in range(4):
        for j in range(4):
            s[i][j] ^= k[i][j]


## AES SubBytes
def sub_bytes(s):
    for i in range(4):
        for j in range(4):
            s[i][j] = s_box[s[i][j]]


def inv_sub_bytes(s):
    for i in range(4):
        for j in range(4):
            s[i][j] = inv_s_box[s[i][j]]


## AES ShiftRows
def shift_rows(s):
    s[0][1], s[1][1], s[2][1], s[3][1] = s[1][1], s[2][1], s[3][1], s[0][1]
    s[0][2], s[1][2], s[2][2], s[3][2] = s[2][2], s[3][2], s[0][2], s[1][2]
    s[0][3], s[1][3], s[2][3], s[3][3] = s[3][3], s[0][3], s[1][3], s[2][3]


def inv_shift_rows(s):
    s[0][1], s[1][1], s[2][1], s[3][1] = s[3][1], s[0][1], s[1][1], s[2][1]
    s[0][2], s[1][2], s[2][2], s[3][2] = s[2][2], s[3][2], s[0][2], s[1][2]
    s[0][3], s[1][3], s[2][3], s[3][3] = s[1][3], s[2][3], s[3][3], s[0][3]


## AES MixColumns
# learned from http://cs.ucsb.edu/~koc/cs178/projects/JT/aes.c
xtime = lambda a: (((a << 1) ^ 0x1B) & 0xFF) if (a & 0x80) else (a << 1)


def mix_single_column(a):
    # see Sec 4.1.2 in The Design of Rijndael
    t = a[0] ^ a[1] ^ a[2] ^ a[3]
    u = a[0]
    a[0] ^= t ^ xtime(a[0] ^ a[1])
    a[1] ^= t ^ xtime(a[1] ^ a[2])
    a[2] ^= t ^ xtime(a[2] ^ a[3])
    a[3] ^= t ^ xtime(a[3] ^ u)


def mix_columns(s):
    for i in range(4):
        mix_single_column(s[i])


def inv_mix_columns(s):
    # see Sec 4.1.3 in The Design of Rijndael
    for i in range(4):
        u = xtime(xtime(s[i][0] ^ s[i][2]))
        v = xtime(xtime(s[i][1] ^ s[i][3]))
        s[i][0] ^= u
        s[i][1] ^= v
        s[i][2] ^= u
        s[i][3] ^= v

    mix_columns(s)


## AES Bytes
def bytes2matrix(text):
    """ Converts a 16-byte array into a 4x4 matrix. """
    return [list(text[i:i+4]) for i in range(0, len(text), 4)]

def matrix2bytes(matrix):
    """ Converts a 4x4 matrix into a 16-byte array. """
    return bytes(sum(matrix, []))


def xor_bytes(a, b):
    """ Returns a new byte array with the elements xor'ed. """
    return bytes(i^j for i, j in zip(a, b))


def split_blocks(message, block_size=16, require_padding=True):
    assert len(message) % block_size == 0 or not require_padding
    return [message[i:i+block_size] for i in range(0, len(message), block_size)]
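As a quick illustration of the byte/matrix helpers in the file above, the following standalone sketch (which re-declares minimal copies of them, and is not part of the vendored file) shows that a 16-byte block survives a round-trip through `bytes2matrix`/`matrix2bytes`, and that `xor_bytes` is its own inverse:

```python
def bytes2matrix(text):
    # converts a 16-byte string into a 4x4 matrix (list of four 4-byte rows)
    return [list(text[i:i+4]) for i in range(0, len(text), 4)]

def matrix2bytes(matrix):
    # flattens a 4x4 matrix back into 16 bytes
    return bytes(sum(matrix, []))

def xor_bytes(a, b):
    # element-wise XOR of two equal-length byte strings
    return bytes(i ^ j for i, j in zip(a, b))

block = b"SingleBlock Text"  # exactly 16 bytes
assert matrix2bytes(bytes2matrix(block)) == block

mask = bytes(range(16))
assert xor_bytes(xor_bytes(block, mask), mask) == block  # XOR is self-inverse
```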
@ -0,0 +1,58 @@
# MIT License
#
# Copyright (c) 2015 Brian Warner and other contributors

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from . import eddsa

class BadSignatureError(Exception):
    pass

SECRETKEYBYTES = 64
PUBLICKEYBYTES = 32
SIGNATUREKEYBYTES = 64

def publickey(seed32):
    assert len(seed32) == 32
    vk32 = eddsa.publickey(seed32)
    return vk32, seed32+vk32

def sign(msg, skvk):
    assert len(skvk) == 64
    sk = skvk[:32]
    vk = skvk[32:]
    sig = eddsa.signature(msg, sk, vk)
    return sig+msg

def open(sigmsg, vk):
    assert len(vk) == 32
    sig = sigmsg[:64]
    msg = sigmsg[64:]
    try:
        valid = eddsa.checkvalid(sig, msg, vk)
    except ValueError as e:
        raise BadSignatureError(e)
    except Exception as e:
        if str(e) == "decoding point that is not on curve":
            raise BadSignatureError(e)
        raise
    if not valid:
        raise BadSignatureError()
    return msg
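The `sign`/`open` pair above uses a simple framing convention: `sign` returns the 64-byte signature concatenated with the message, and `open` slices the first 64 bytes back off before verifying. A standalone sketch of just the framing (with a dummy all-zero "signature" as a placeholder, no real cryptography):

```python
def frame(sig64, msg):
    # sign() returns signature || message
    assert len(sig64) == 64
    return sig64 + msg

def unframe(sigmsg):
    # open() splits the first 64 bytes off as the signature
    return sigmsg[:64], sigmsg[64:]

sig = bytes(64)  # placeholder, not a real signature
msg = b"hello"
sig2, msg2 = unframe(frame(sig, msg))
assert sig2 == sig and msg2 == msg
```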
@ -0,0 +1,368 @@
# MIT License
#
# Copyright (c) 2015 Brian Warner and other contributors

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import binascii, hashlib, itertools

Q = 2**255 - 19
L = 2**252 + 27742317777372353535851937790883648493

def inv(x):
    return pow(x, Q-2, Q)

d = -121665 * inv(121666)
I = pow(2, (Q-1)//4, Q)

def xrecover(y):
    xx = (y*y-1) * inv(d*y*y+1)
    x = pow(xx, (Q+3)//8, Q)
    if (x*x - xx) % Q != 0: x = (x*I) % Q
    if x % 2 != 0: x = Q-x
    return x

By = 4 * inv(5)
Bx = xrecover(By)
B = [Bx % Q, By % Q]

# Extended Coordinates: x=X/Z, y=Y/Z, x*y=T/Z
# http://www.hyperelliptic.org/EFD/g1p/auto-twisted-extended-1.html

def xform_affine_to_extended(pt):
    (x, y) = pt
    return (x%Q, y%Q, 1, (x*y)%Q) # (X,Y,Z,T)

def xform_extended_to_affine(pt):
    (x, y, z, _) = pt
    return ((x*inv(z))%Q, (y*inv(z))%Q)

def double_element(pt): # extended->extended
    # dbl-2008-hwcd
    (X1, Y1, Z1, _) = pt
    A = (X1*X1)
    B = (Y1*Y1)
    C = (2*Z1*Z1)
    D = (-A) % Q
    J = (X1+Y1) % Q
    E = (J*J-A-B) % Q
    G = (D+B) % Q
    F = (G-C) % Q
    H = (D-B) % Q
    X3 = (E*F) % Q
    Y3 = (G*H) % Q
    Z3 = (F*G) % Q
    T3 = (E*H) % Q
    return (X3, Y3, Z3, T3)

def add_elements(pt1, pt2): # extended->extended
    # add-2008-hwcd-3 . Slightly slower than add-2008-hwcd-4, but -3 is
    # unified, so it's safe for general-purpose addition
    (X1, Y1, Z1, T1) = pt1
    (X2, Y2, Z2, T2) = pt2
    A = ((Y1-X1)*(Y2-X2)) % Q
    B = ((Y1+X1)*(Y2+X2)) % Q
    C = T1*(2*d)*T2 % Q
    D = Z1*2*Z2 % Q
    E = (B-A) % Q
    F = (D-C) % Q
    G = (D+C) % Q
    H = (B+A) % Q
    X3 = (E*F) % Q
    Y3 = (G*H) % Q
    T3 = (E*H) % Q
    Z3 = (F*G) % Q
    return (X3, Y3, Z3, T3)

def scalarmult_element_safe_slow(pt, n):
    # this form is slightly slower, but tolerates arbitrary points, including
    # those which are not in the main 1*L subgroup. This includes points of
    # order 1 (the neutral element Zero), 2, 4, and 8.
    assert n >= 0
    if n==0:
        return xform_affine_to_extended((0,1))
    _ = double_element(scalarmult_element_safe_slow(pt, n>>1))
    return add_elements(_, pt) if n&1 else _

def _add_elements_nonunified(pt1, pt2): # extended->extended
    # add-2008-hwcd-4 : NOT unified, only for pt1!=pt2. About 10% faster than
    # the (unified) add-2008-hwcd-3, and safe to use inside scalarmult if you
    # aren't using points of order 1/2/4/8
    (X1, Y1, Z1, T1) = pt1
    (X2, Y2, Z2, T2) = pt2
    A = ((Y1-X1)*(Y2+X2)) % Q
    B = ((Y1+X1)*(Y2-X2)) % Q
    C = (Z1*2*T2) % Q
    D = (T1*2*Z2) % Q
    E = (D+C) % Q
    F = (B-A) % Q
    G = (B+A) % Q
    H = (D-C) % Q
    X3 = (E*F) % Q
    Y3 = (G*H) % Q
    Z3 = (F*G) % Q
    T3 = (E*H) % Q
    return (X3, Y3, Z3, T3)

def scalarmult_element(pt, n): # extended->extended
    # This form only works properly when given points that are a member of
    # the main 1*L subgroup. It will give incorrect answers when called with
    # the points of order 1/2/4/8, including point Zero. (it will also work
    # properly when given points of order 2*L/4*L/8*L)
    assert n >= 0
    if n==0:
        return xform_affine_to_extended((0,1))
    _ = double_element(scalarmult_element(pt, n>>1))
    return _add_elements_nonunified(_, pt) if n&1 else _

# points are encoded as 32-bytes little-endian, b255 is sign, b2b1b0 are 0

def encodepoint(P):
    x = P[0]
    y = P[1]
    # MSB of output equals x.b0 (=x&1)
    # rest of output is little-endian y
    assert 0 <= y < (1<<255) # always < 0x7fff..ff
    if x & 1:
        y += 1<<255
    return binascii.unhexlify("%064x" % y)[::-1]

def isoncurve(P):
    x = P[0]
    y = P[1]
    return (-x*x + y*y - 1 - d*x*x*y*y) % Q == 0

class NotOnCurve(Exception):
    pass

def decodepoint(s):
    unclamped = int(binascii.hexlify(s[:32][::-1]), 16)
    clamp = (1 << 255) - 1
    y = unclamped & clamp # clear MSB
    x = xrecover(y)
    if bool(x & 1) != bool(unclamped & (1<<255)): x = Q-x
    P = [x,y]
    if not isoncurve(P): raise NotOnCurve("decoding point that is not on curve")
    return P

# scalars are encoded as 32-bytes little-endian

def bytes_to_scalar(s):
    assert len(s) == 32, len(s)
    return int(binascii.hexlify(s[::-1]), 16)

def bytes_to_clamped_scalar(s):
    # Ed25519 private keys clamp the scalar to ensure two things:
    # 1: integer value is in L/2 .. L, to avoid small-logarithm
    # non-wraparound
    # 2: low-order 3 bits are zero, so a small-subgroup attack won't learn
    # any information
    # set the top two bits to 01, and the bottom three to 000
    a_unclamped = bytes_to_scalar(s)
    AND_CLAMP = (1<<254) - 1 - 7
    OR_CLAMP = (1<<254)
    a_clamped = (a_unclamped & AND_CLAMP) | OR_CLAMP
    return a_clamped

def random_scalar(entropy_f): # 0..L-1 inclusive
    # reduce the bias to a safe level by generating 256 extra bits
    oversized = int(binascii.hexlify(entropy_f(32+32)), 16)
    return oversized % L

def password_to_scalar(pw):
    oversized = hashlib.sha512(pw).digest()
    return int(binascii.hexlify(oversized), 16) % L

def scalar_to_bytes(y):
    y = y % L
    assert 0 <= y < 2**256
    return binascii.unhexlify("%064x" % y)[::-1]

# Elements, of various orders

def is_extended_zero(XYTZ):
    # catch Zero
    (X, Y, Z, T) = XYTZ
    Y = Y % Q
    Z = Z % Q
    if X==0 and Y==Z and Y!=0:
        return True
    return False

class ElementOfUnknownGroup:
    # This is used for points of order 2,4,8,2*L,4*L,8*L
    def __init__(self, XYTZ):
        assert isinstance(XYTZ, tuple)
        assert len(XYTZ) == 4
        self.XYTZ = XYTZ

    def add(self, other):
        if not isinstance(other, ElementOfUnknownGroup):
            raise TypeError("elements can only be added to other elements")
        sum_XYTZ = add_elements(self.XYTZ, other.XYTZ)
        if is_extended_zero(sum_XYTZ):
            return Zero
        return ElementOfUnknownGroup(sum_XYTZ)

    def scalarmult(self, s):
        if isinstance(s, ElementOfUnknownGroup):
            raise TypeError("elements cannot be multiplied together")
        assert s >= 0
        product = scalarmult_element_safe_slow(self.XYTZ, s)
        return ElementOfUnknownGroup(product)

    def to_bytes(self):
        return encodepoint(xform_extended_to_affine(self.XYTZ))
    def __eq__(self, other):
        return self.to_bytes() == other.to_bytes()
    def __ne__(self, other):
        return not self == other

class Element(ElementOfUnknownGroup):
    # this only holds elements in the main 1*L subgroup. It never holds Zero,
    # or elements of order 1/2/4/8, or 2*L/4*L/8*L.

    def add(self, other):
        if not isinstance(other, ElementOfUnknownGroup):
            raise TypeError("elements can only be added to other elements")
        sum_element = ElementOfUnknownGroup.add(self, other)
        if sum_element is Zero:
            return sum_element
        if isinstance(other, Element):
            # adding two subgroup elements results in another subgroup
            # element, or Zero, and we've already excluded Zero
            return Element(sum_element.XYTZ)
        # not necessarily a subgroup member, so assume not
        return sum_element

    def scalarmult(self, s):
        if isinstance(s, ElementOfUnknownGroup):
            raise TypeError("elements cannot be multiplied together")
        # scalarmult of subgroup members can be done modulo the subgroup
        # order, and using the faster non-unified function.
        s = s % L
        # scalarmult(s=0) gets you Zero
        if s == 0:
            return Zero
        # scalarmult(s=1) gets you self, which is a subgroup member
        # scalarmult(s<grouporder) gets you a different subgroup member
        return Element(scalarmult_element(self.XYTZ, s))

    # negation and subtraction only make sense for the main subgroup
    def negate(self):
        # slow. Prefer e.scalarmult(-pw) to e.scalarmult(pw).negate()
        return Element(scalarmult_element(self.XYTZ, L-2))
    def subtract(self, other):
        return self.add(other.negate())

class _ZeroElement(ElementOfUnknownGroup):
    def add(self, other):
        return other # zero+anything = anything
    def scalarmult(self, s):
        return self # zero*anything = zero
    def negate(self):
        return self # -zero = zero
    def subtract(self, other):
        return self.add(other.negate())


Base = Element(xform_affine_to_extended(B))
Zero = _ZeroElement(xform_affine_to_extended((0,1))) # the neutral (identity) element

_zero_bytes = Zero.to_bytes()


def arbitrary_element(seed): # unknown DL
    # TODO: if we don't need uniformity, maybe use just sha256 here?
    hseed = hashlib.sha512(seed).digest()
    y = int(binascii.hexlify(hseed), 16) % Q

    # we try successive Y values until we find a valid point
    for plus in itertools.count(0):
        y_plus = (y + plus) % Q
        x = xrecover(y_plus)
        Pa = [x,y_plus] # no attempt to use both "positive" and "negative" X

        # only about 50% of Y coordinates map to valid curve points (I think
        # the other half give you points on the "twist").
        if not isoncurve(Pa):
            continue

        P = ElementOfUnknownGroup(xform_affine_to_extended(Pa))
        # even if the point is on our curve, it may not be in our particular
        # (order=L) subgroup. The curve has order 8*L, so an arbitrary point
        # could have order 1,2,4,8,1*L,2*L,4*L,8*L (everything which divides
        # the group order).

        # [I MAY BE COMPLETELY WRONG ABOUT THIS, but my brief statistical
        # tests suggest it's not too far off] There are phi(x) points with
        # order x, so:
        #  1 element of order 1: [(x=0,y=1)=Zero]
        #  1 element of order 2 [(x=0,y=-1)]
        #  2 elements of order 4
        #  4 elements of order 8
        #  L-1 elements of order L (including Base)
        #  L-1 elements of order 2*L
        #  2*(L-1) elements of order 4*L
        #  4*(L-1) elements of order 8*L

        # So 50% of random points will have order 8*L, 25% will have order
        # 4*L, 13% order 2*L, and 13% will have our desired order 1*L (and a
        # vanishingly small fraction will have 1/2/4/8). If we multiply any
        # of the 8*L points by 2, we're sure to get a 4*L point (and
        # multiplying a 4*L point by 2 gives us a 2*L point, and so on).
        # Multiplying a 1*L point by 2 gives us a different 1*L point. So
        # multiplying by 8 gets us from almost any point into a uniform point
        # on the correct 1*L subgroup.

        P8 = P.scalarmult(8)

        # if we got really unlucky and picked one of the 8 low-order points,
        # multiplying by 8 will get us to the identity (Zero), which we check
        # for explicitly.
        if is_extended_zero(P8.XYTZ):
            continue

        # Test that we're finally in the right group. We want to scalarmult
        # by L, and we want to *not* use the trick in Element.scalarmult()
        # which does x%L, because that would bypass the check we care about.
        # P is still an ElementOfUnknownGroup, which doesn't use x%L because
        # that's not correct for points outside the main group.
        assert is_extended_zero(P8.scalarmult(L).XYTZ)

        return Element(P8.XYTZ)
    # never reached

def bytes_to_unknown_group_element(bytes):
    # this accepts all elements, including Zero and wrong-subgroup ones
    if bytes == _zero_bytes:
        return Zero
    XYTZ = xform_affine_to_extended(decodepoint(bytes))
    return ElementOfUnknownGroup(XYTZ)

def bytes_to_element(bytes):
    # this strictly only accepts elements in the right subgroup
    P = bytes_to_unknown_group_element(bytes)
    if P is Zero:
        raise ValueError("element was Zero")
    if not is_extended_zero(P.scalarmult(L).XYTZ):
        raise ValueError("element is not in the right group")
    # the point is in the expected 1*L subgroup, not in the 2/4/8 groups,
    # or in the 2*L/4*L/8*L groups. Promote it to a correct-group Element.
    return Element(P.XYTZ)
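The clamping rule described in the `bytes_to_clamped_scalar` comments above (clear the bottom three bits, clear bit 255, set bit 254) can be checked directly. A standalone sketch that re-declares the two scalar helpers, not part of the vendored file:

```python
def bytes_to_scalar(s):
    # scalars are encoded as 32 bytes little-endian
    assert len(s) == 32
    return int.from_bytes(s, "little")

def bytes_to_clamped_scalar(s):
    # clear the bottom 3 bits and bit 255, set bit 254
    a_unclamped = bytes_to_scalar(s)
    AND_CLAMP = (1 << 254) - 1 - 7
    OR_CLAMP = (1 << 254)
    return (a_unclamped & AND_CLAMP) | OR_CLAMP

a = bytes_to_clamped_scalar(b"\xff" * 32)
assert a % 8 == 0       # low three bits cleared
assert a >> 254 == 1    # bit 254 set, bit 255 cleared
```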
@ -0,0 +1,213 @@
|
|||
# MIT License
|
||||
#
|
||||
# Copyright (c) 2015 Brian Warner and other contributors
|
||||
|
||||
# Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
# of this software and associated documentation files (the "Software"), to deal
|
||||
# in the Software without restriction, including without limitation the rights
|
||||
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
# copies of the Software, and to permit persons to whom the Software is
|
||||
# furnished to do so, subject to the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be included in all
|
||||
# copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
# SOFTWARE.
|
||||
|
||||
import os
|
||||
import base64
|
||||
from . import _ed25519
|
||||
BadSignatureError = _ed25519.BadSignatureError
|
||||
|
||||
def create_keypair(entropy=os.urandom):
|
||||
SEEDLEN = int(_ed25519.SECRETKEYBYTES/2)
|
||||
assert SEEDLEN == 32
|
||||
seed = entropy(SEEDLEN)
|
||||
sk = SigningKey(seed)
|
||||
vk = sk.get_verifying_key()
|
||||
return sk, vk
|
||||
|
||||
class BadPrefixError(Exception):
|
||||
pass
|
||||
|
||||
def remove_prefix(s_bytes, prefix):
|
||||
assert(type(s_bytes) == type(prefix))
|
||||
if s_bytes[:len(prefix)] != prefix:
|
||||
raise BadPrefixError("did not see expected '%s' prefix" % (prefix,))
|
||||
return s_bytes[len(prefix):]
|
||||
|
||||
def to_ascii(s_bytes, prefix="", encoding="base64"):
|
||||
"""Return a version-prefixed ASCII representation of the given binary
|
||||
string. 'encoding' indicates how to do the encoding, and can be one of:
|
||||
* base64
|
||||
* base32
|
||||
* base16 (or hex)
|
||||
|
||||
This function handles bytes, not bits, so it does not append any trailing
|
||||
'=' (unlike standard base64.b64encode). It also lowercases the base32
|
||||
output.
|
||||
|
||||
'prefix' will be prepended to the encoded form, and is useful for
|
||||
distinguishing the purpose and version of the binary string. E.g. you
|
||||
could prepend 'pub0-' to a VerifyingKey string to allow the receiving
|
||||
code to raise a useful error if someone pasted in a signature string by
|
||||
mistake.
|
||||
"""
|
||||
assert isinstance(s_bytes, bytes)
|
||||
if not isinstance(prefix, bytes):
|
||||
prefix = prefix.encode('ascii')
|
||||
if encoding == "base64":
|
||||
s_ascii = base64.b64encode(s_bytes).decode('ascii').rstrip("=")
|
||||
elif encoding == "base32":
|
||||
s_ascii = base64.b32encode(s_bytes).decode('ascii').rstrip("=").lower()
|
||||
elif encoding in ("base16", "hex"):
|
||||
s_ascii = base64.b16encode(s_bytes).decode('ascii').lower()
|
||||
else:
|
||||
raise NotImplementedError
|
||||
return prefix+s_ascii.encode('ascii')
|
||||
|
||||
def from_ascii(s_ascii, prefix="", encoding="base64"):
|
||||
"""This is the opposite of to_ascii. It will throw BadPrefixError if
|
||||
the prefix is not found.
|
||||
"""
|
||||
if isinstance(s_ascii, bytes):
|
||||
s_ascii = s_ascii.decode('ascii')
|
||||
if isinstance(prefix, bytes):
|
||||
prefix = prefix.decode('ascii')
|
||||
s_ascii = remove_prefix(s_ascii.strip(), prefix)
|
||||
if encoding == "base64":
|
||||
s_ascii += "="*((4 - len(s_ascii)%4)%4)
|
||||
s_bytes = base64.b64decode(s_ascii)
|
||||
elif encoding == "base32":
|
||||
s_ascii += "="*((8 - len(s_ascii)%8)%8)
|
||||
s_bytes = base64.b32decode(s_ascii.upper())
|
||||
elif encoding in ("base16", "hex"):
|
||||
s_bytes = base64.b16decode(s_ascii.upper())
|
||||
else:
|
||||
raise NotImplementedError
|
||||
return s_bytes
|
||||
|
||||
class SigningKey(object):
|
||||
# this can only be used to reconstruct a key created by create_keypair().
|
||||
def __init__(self, sk_s, prefix="", encoding=None):
|
||||
assert isinstance(sk_s, bytes)
|
||||
if not isinstance(prefix, bytes):
|
||||
prefix = prefix.encode('ascii')
|
||||
sk_s = remove_prefix(sk_s, prefix)
|
||||
if encoding is not None:
|
||||
sk_s = from_ascii(sk_s, encoding=encoding)
|
||||
if len(sk_s) == 32:
|
||||
# create from seed
|
||||
vk_s, sk_s = _ed25519.publickey(sk_s)
|
||||
else:
|
||||
if len(sk_s) != 32+32:
|
||||
raise ValueError("SigningKey takes 32-byte seed or 64-byte string")
|
||||
self.sk_s = sk_s # seed+pubkey
|
||||
self.vk_s = sk_s[32:] # just pubkey
|
||||
|
||||
def to_bytes(self, prefix=""):
|
||||
if not isinstance(prefix, bytes):
|
||||
prefix = prefix.encode('ascii')
|
||||
return prefix+self.sk_s
|
||||
|
||||
def to_ascii(self, prefix="", encoding=None):
|
||||
assert encoding
|
||||
if not isinstance(prefix, bytes):
|
||||
prefix = prefix.encode('ascii')
|
||||
return to_ascii(self.to_seed(), prefix, encoding)
|
||||
|
||||
def to_seed(self, prefix=""):
|
||||
if not isinstance(prefix, bytes):
|
||||
prefix = prefix.encode('ascii')
|
||||
return prefix+self.sk_s[:32]
|
||||
|
||||
    def __eq__(self, them):
        if not isinstance(them, object): return False
        return (them.__class__ == self.__class__
                and them.sk_s == self.sk_s)

    def get_verifying_key(self):
        return VerifyingKey(self.vk_s)

    def sign(self, msg, prefix="", encoding=None):
        assert isinstance(msg, bytes)
        if not isinstance(prefix, bytes):
            prefix = prefix.encode('ascii')
        sig_and_msg = _ed25519.sign(msg, self.sk_s)
        # the response is R+S+msg
        sig_R = sig_and_msg[0:32]
        sig_S = sig_and_msg[32:64]
        msg_out = sig_and_msg[64:]
        sig_out = sig_R + sig_S
        assert msg_out == msg
        if encoding:
            return to_ascii(sig_out, prefix, encoding)
        return prefix+sig_out

class VerifyingKey(object):
    def __init__(self, vk_s, prefix="", encoding=None):
        if not isinstance(prefix, bytes):
            prefix = prefix.encode('ascii')
        if not isinstance(vk_s, bytes):
            vk_s = vk_s.encode('ascii')
        assert isinstance(vk_s, bytes)
        vk_s = remove_prefix(vk_s, prefix)
        if encoding is not None:
            vk_s = from_ascii(vk_s, encoding=encoding)

        assert len(vk_s) == 32
        self.vk_s = vk_s

    def to_bytes(self, prefix=""):
        if not isinstance(prefix, bytes):
            prefix = prefix.encode('ascii')
        return prefix+self.vk_s

    def to_ascii(self, prefix="", encoding=None):
        assert encoding
        if not isinstance(prefix, bytes):
            prefix = prefix.encode('ascii')
        return to_ascii(self.vk_s, prefix, encoding)

    def __eq__(self, them):
        if not isinstance(them, object): return False
        return (them.__class__ == self.__class__
                and them.vk_s == self.vk_s)

    def verify(self, sig, msg, prefix="", encoding=None):
        if not isinstance(sig, bytes):
            sig = sig.encode('ascii')
        if not isinstance(prefix, bytes):
            prefix = prefix.encode('ascii')
        assert isinstance(sig, bytes)
        assert isinstance(msg, bytes)
        if encoding:
            sig = from_ascii(sig, prefix, encoding)
        else:
            sig = remove_prefix(sig, prefix)
        assert len(sig) == 64
        sig_R = sig[:32]
        sig_S = sig[32:]
        sig_and_msg = sig_R + sig_S + msg
        # this might raise BadSignatureError
        msg2 = _ed25519.open(sig_and_msg, self.vk_s)
        assert msg2 == msg

def selftest():
    message = b"crypto libraries should always test themselves at powerup"
    sk = SigningKey(b"priv0-VIsfn5OFGa09Un2MR6Hm7BQ5++xhcQskU2OGXG8jSJl4cWLZrRrVcSN2gVYMGtZT+3354J5jfmqAcuRSD9KIyg",
                    prefix="priv0-", encoding="base64")
    vk = VerifyingKey(b"pub0-eHFi2a0a1XEjdoFWDBrWU/t9+eCeY35qgHLkUg/SiMo",
                      prefix="pub0-", encoding="base64")
    assert sk.get_verifying_key() == vk
    sig = sk.sign(message, prefix="sig0-", encoding="base64")
    assert sig == b"sig0-E/QrwtSF52x8+q0l4ahA7eJbRKc777ClKNg217Q0z4fiYMCdmAOI+rTLVkiFhX6k3D+wQQfKdJYMxaTUFfv1DQ", sig
    vk.verify(sig, message, prefix="sig0-", encoding="base64")

selftest()
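The selftest vectors above show the ASCII serialization convention used here: a short type prefix ("priv0-", "pub0-", "sig0-") followed by standard base64 with the trailing "=" padding stripped. A minimal, illustrative sketch of that convention (these `to_ascii`/`from_ascii` helpers mirror the module's names but are a standalone reimplementation, not the module's code):

```python
import base64

def to_ascii(data: bytes, prefix: bytes) -> bytes:
    # standard base64, "=" padding stripped, as in the "priv0-"/"pub0-" vectors above
    return prefix + base64.b64encode(data).rstrip(b"=")

def from_ascii(s: bytes, prefix: bytes) -> bytes:
    body = s[len(prefix):]
    # restore padding to a multiple of four before decoding
    return base64.b64decode(body + b"=" * (-len(body) % 4))

raw = bytes(range(32))
encoded = to_ascii(raw, b"pub0-")
assert encoded.startswith(b"pub0-") and not encoded.endswith(b"=")
assert from_ascii(encoded, b"pub0-") == raw
```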
@@ -0,0 +1,94 @@
# MIT License
#
# Copyright (c) 2015 Brian Warner and other contributors
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from RNS.Cryptography.Hashes import sha512
from .basic import (bytes_to_clamped_scalar,
                    bytes_to_scalar, scalar_to_bytes,
                    bytes_to_element, Base)
import hashlib, binascii

def H(m):
    return sha512(m)

def publickey(seed):
    # turn first half of SHA512(seed) into scalar, then into point
    assert len(seed) == 32
    a = bytes_to_clamped_scalar(H(seed)[:32])
    A = Base.scalarmult(a)
    return A.to_bytes()

def Hint(m):
    h = H(m)
    return int(binascii.hexlify(h[::-1]), 16)

def signature(m, sk, pk):
    assert len(sk) == 32 # seed
    assert len(pk) == 32
    h = H(sk[:32])
    a_bytes, inter = h[:32], h[32:]
    a = bytes_to_clamped_scalar(a_bytes)
    r = Hint(inter + m)
    R = Base.scalarmult(r)
    R_bytes = R.to_bytes()
    S = r + Hint(R_bytes + pk + m) * a
    return R_bytes + scalar_to_bytes(S)

def checkvalid(s, m, pk):
    if len(s) != 64: raise Exception("signature length is wrong")
    if len(pk) != 32: raise Exception("public-key length is wrong")
    R = bytes_to_element(s[:32])
    A = bytes_to_element(pk)
    S = bytes_to_scalar(s[32:])
    h = Hint(s[:32] + pk + m)
    v1 = Base.scalarmult(S)
    v2 = R.add(A.scalarmult(h))
    return v1 == v2

# wrappers

import os

def create_signing_key():
    seed = os.urandom(32)
    return seed

def create_verifying_key(signing_key):
    return publickey(signing_key)

def sign(skbytes, msg):
    """Return just the signature, given the message and just the secret
    key."""
    if len(skbytes) != 32:
        raise ValueError("Bad signing key length %d" % len(skbytes))
    vkbytes = create_verifying_key(skbytes)
    sig = signature(msg, skbytes, vkbytes)
    return sig

def verify(vkbytes, sig, msg):
    if len(vkbytes) != 32:
        raise ValueError("Bad verifying key length %d" % len(vkbytes))
    if len(sig) != 64:
        raise ValueError("Bad signature length %d" % len(sig))
    rc = checkvalid(sig, msg, vkbytes)
    if not rc:
        raise ValueError("rc != 0", rc)
    return True
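`Hint` above interprets a SHA-512 digest as a little-endian integer by reversing the bytes before hex-parsing. A quick stdlib-only check that this equals the direct little-endian conversion (using `hashlib.sha512` as a stand-in for the module's `H`):

```python
import hashlib, binascii

def Hint(m):
    # reverse the digest, then parse as big-endian hex == little-endian integer
    h = hashlib.sha512(m).digest()
    return int(binascii.hexlify(h[::-1]), 16)

m = b"crypto libraries should always test themselves at powerup"
assert Hint(m) == int.from_bytes(hashlib.sha512(m).digest(), "little")
```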
@@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal

@@ -20,14 +20,11 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import base64
import math
import time
import RNS

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.backends import default_backend
from RNS.Cryptography import Fernet

class Callbacks:
    def __init__(self):

@@ -72,8 +69,10 @@ class Destination:
    OUT = 0x12;
    directions = [IN, OUT]

    PR_TAG_WINDOW = 30

    @staticmethod
    def full_name(app_name, *aspects):
    def expand_name(identity, app_name, *aspects):
        """
        :returns: A string containing the full human-readable name of the destination, for an app_name and a number of aspects.
        """

@@ -84,23 +83,30 @@ class Destination:
        name = app_name
        for aspect in aspects:
            if "." in aspect: raise ValueError("Dots can't be used in aspects")
            name = name + "." + aspect
            name += "." + aspect

        if identity != None:
            name += "." + identity.hexhash

        return name

    @staticmethod
    def hash(app_name, *aspects):
    def hash(identity, app_name, *aspects):
        """
        :returns: A destination name in addressable hash form, for an app_name and a number of aspects.
        """
        name = Destination.full_name(app_name, *aspects)
        name_hash = RNS.Identity.full_hash(Destination.expand_name(None, app_name, *aspects).encode("utf-8"))[:(RNS.Identity.NAME_HASH_LENGTH//8)]
        addr_hash_material = name_hash
        if identity != None:
            if isinstance(identity, RNS.Identity):
                addr_hash_material += identity.hash
            elif isinstance(identity, bytes) and len(identity) == RNS.Reticulum.TRUNCATED_HASHLENGTH//8:
                addr_hash_material += identity
            else:
                raise TypeError("Invalid material supplied for destination hash calculation")

        # Create a digest for the destination
        digest = hashes.Hash(hashes.SHA256(), backend=default_backend())
        digest.update(name.encode("UTF-8"))

        return digest.finalize()[:10]
        return RNS.Identity.full_hash(addr_hash_material)[:RNS.Reticulum.TRUNCATED_HASHLENGTH//8]

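The rewritten `hash()` above derives the address in two stages: a name hash over the identity-free name, then a truncated full hash over the name hash plus the identity hash. A standalone sketch of that derivation, assuming `full_hash` is SHA-256 (as the Identity diff below indicates) and that `RNS.Reticulum.TRUNCATED_HASHLENGTH` is 128 bits (an assumption; the constant itself is not shown in this diff):

```python
import hashlib

NAME_HASH_LENGTH = 80        # bits, Identity.NAME_HASH_LENGTH from the diff below
TRUNCATED_HASHLENGTH = 128   # bits, assumed value of RNS.Reticulum.TRUNCATED_HASHLENGTH

def full_hash(data: bytes) -> bytes:
    # stand-in for RNS.Identity.full_hash, which is SHA-256 per the Identity diff
    return hashlib.sha256(data).digest()

def destination_hash(app_name, aspects, identity_hash):
    # expand_name(None, app_name, *aspects): dot-joined name without the identity
    name = ".".join([app_name] + list(aspects))
    name_hash = full_hash(name.encode("utf-8"))[:NAME_HASH_LENGTH // 8]
    # the address commits to both the name hash and the identity hash
    return full_hash(name_hash + identity_hash)[:TRUNCATED_HASHLENGTH // 8]

addr = destination_hash("example_app", ["aspect"], bytes(16))
assert len(addr) == TRUNCATED_HASHLENGTH // 8
```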
    @staticmethod
    def app_and_aspects_from_name(full_name):

@@ -116,14 +122,16 @@ class Destination:
        :returns: A destination name in addressable hash form, for a full name string and Identity instance.
        """
        app_name, aspects = Destination.app_and_aspects_from_name(full_name)
        aspects.append(identity.hexhash)
        return Destination.hash(app_name, *aspects)

        return Destination.hash(identity, app_name, *aspects)

    def __init__(self, identity, direction, type, app_name, *aspects):
        # Check input values and build name string
        if "." in app_name: raise ValueError("Dots can't be used in app names")
        if not type in Destination.types: raise ValueError("Unknown destination type")
        if not direction in Destination.directions: raise ValueError("Unknown destination direction")

        self.accept_link_requests = True
        self.callbacks = Callbacks()
        self.request_handlers = {}
        self.type = type

@@ -131,11 +139,9 @@ class Destination:
        self.proof_strategy = Destination.PROVE_NONE
        self.mtu = 0

        self.path_responses = {}
        self.links = []

        if identity != None and type == Destination.SINGLE:
            aspects = aspects+(identity.hexhash,)

        if identity == None and direction == Destination.IN and self.type != Destination.PLAIN:
            identity = RNS.Identity()
            aspects = aspects+(identity.hexhash,)

@@ -144,12 +150,14 @@ class Destination:
            raise TypeError("Selected destination type PLAIN cannot hold an identity")

        self.identity = identity
        self.name = Destination.expand_name(identity, app_name, *aspects)

        self.name = Destination.full_name(app_name, *aspects)
        self.hash = Destination.hash(app_name, *aspects)
        # Generate the destination address hash
        self.hash = Destination.hash(self.identity, app_name, *aspects)
        self.name_hash = RNS.Identity.full_hash(self.expand_name(None, app_name, *aspects).encode("utf-8"))[:(RNS.Identity.NAME_HASH_LENGTH//8)]
        self.hexhash = self.hash.hex()
        self.default_app_data = None

        self.default_app_data = None
        self.callback = None
        self.proofcallback = None

@@ -163,7 +171,7 @@ class Destination:
        return "<"+self.name+"/"+self.hexhash+">"

    def announce(self, app_data=None, path_response=False):
    def announce(self, app_data=None, path_response=False, attached_interface=None, tag=None, send=True):
        """
        Creates an announce packet for this destination and broadcasts it on all
        relevant interfaces. Application specific data can be added to the announce.

@@ -173,36 +181,83 @@ class Destination:
        """
        if self.type != Destination.SINGLE:
            raise TypeError("Only SINGLE destination types can be announced")

        destination_hash = self.hash
        random_hash = RNS.Identity.get_random_hash()[0:5]+int(time.time()).to_bytes(5, "big")

        if app_data == None and self.default_app_data != None:
            if isinstance(self.default_app_data, bytes):
                app_data = self.default_app_data
            elif callable(self.default_app_data):
                returned_app_data = self.default_app_data()
                if isinstance(returned_app_data, bytes):
                    app_data = returned_app_data
        if self.direction != Destination.IN:
            raise TypeError("Only IN destination types can be announced")

        signed_data = self.hash+self.identity.get_public_key()+random_hash
        if app_data != None:
            signed_data += app_data
        now = time.time()
        stale_responses = []
        for entry_tag in self.path_responses:
            entry = self.path_responses[entry_tag]
            if now > entry[0]+Destination.PR_TAG_WINDOW:
                stale_responses.append(entry_tag)

        signature = self.identity.sign(signed_data)
        for entry_tag in stale_responses:
            self.path_responses.pop(entry_tag)

        announce_data = self.identity.get_public_key()+random_hash+signature
        if (path_response == True and tag != None) and tag in self.path_responses:
            # This code is currently not used, since Transport will block duplicate
            # path requests based on tags. When multi-path support is implemented in
            # Transport, this will allow Transport to detect redundant paths to the
            # same destination, and select the best one based on chosen criteria,
            # since it will be able to detect that a single emitted announce was
            # received via multiple paths. The difference in reception time will
            # potentially also be useful in determining characteristics of the
            # multiple available paths, and to choose the best one.
            RNS.log("Using cached announce data for answering path request with tag "+RNS.prettyhexrep(tag), RNS.LOG_EXTREME)
            announce_data = self.path_responses[tag][1]

        else:
            destination_hash = self.hash
            random_hash = RNS.Identity.get_random_hash()[0:5]+int(time.time()).to_bytes(5, "big")

        if app_data != None:
            announce_data += app_data
            if app_data == None and self.default_app_data != None:
                if isinstance(self.default_app_data, bytes):
                    app_data = self.default_app_data
                elif callable(self.default_app_data):
                    returned_app_data = self.default_app_data()
                    if isinstance(returned_app_data, bytes):
                        app_data = returned_app_data

            signed_data = self.hash+self.identity.get_public_key()+self.name_hash+random_hash
            if app_data != None:
                signed_data += app_data

            signature = self.identity.sign(signed_data)

            announce_data = self.identity.get_public_key()+self.name_hash+random_hash+signature

            if app_data != None:
                announce_data += app_data

            self.path_responses[tag] = [time.time(), announce_data]

        if path_response:
            announce_context = RNS.Packet.PATH_RESPONSE
        else:
            announce_context = RNS.Packet.NONE

        RNS.Packet(self, announce_data, RNS.Packet.ANNOUNCE, context = announce_context).send()
        announce_packet = RNS.Packet(self, announce_data, RNS.Packet.ANNOUNCE, context = announce_context, attached_interface = attached_interface)

        if send:
            announce_packet.send()
        else:
            return announce_packet

    def accepts_links(self, accepts = None):
        """
        Set or query whether the destination accepts incoming link requests.

        :param accepts: If ``True`` or ``False``, this method sets whether the destination accepts incoming link requests. If not provided or ``None``, the method returns whether the destination currently accepts link requests.
        :returns: ``True`` or ``False`` depending on whether the destination accepts incoming link requests, if the *accepts* parameter is not provided or ``None``.
        """
        if accepts == None:
            return self.accept_link_requests

        if accepts:
            self.accept_link_requests = True
        else:
            self.accept_link_requests = False

    def set_link_established_callback(self, callback):
        """

@@ -249,7 +304,7 @@ class Destination:
        Registers a request handler.

        :param path: The path for the request handler to be registered.
        :param response_generator: A function or method with the signature *response_generator(path, data, request_id, remote_identity, requested_at)* to be called. Whatever this function returns will be sent as a response to the requester. If the function returns ``None``, no response will be sent.
        :param response_generator: A function or method with the signature *response_generator(path, data, request_id, link_id, remote_identity, requested_at)* to be called. Whatever this function returns will be sent as a response to the requester. If the function returns ``None``, no response will be sent.
        :param allow: One of ``RNS.Destination.ALLOW_NONE``, ``RNS.Destination.ALLOW_ALL`` or ``RNS.Destination.ALLOW_LIST``. If ``RNS.Destination.ALLOW_LIST`` is set, the request handler will only respond to requests for identified peers in the supplied list.
        :param allowed_list: A list of *bytes-like* :ref:`RNS.Identity<api-identity>` hashes.
        :raises: ``ValueError`` if any of the supplied arguments are invalid.

@@ -298,9 +353,10 @@ class Destination:

    def incoming_link_request(self, data, packet):
        link = RNS.Link.validate_request(self, data, packet)
        if link != None:
            self.links.append(link)
        if self.accept_link_requests:
            link = RNS.Link.validate_request(self, data, packet)
            if link != None:
                self.links.append(link)

    def create_keys(self):
        """

@@ -315,8 +371,8 @@ class Destination:
            raise TypeError("A single destination holds keys through an Identity instance")

        if self.type == Destination.GROUP:
            self.prv_bytes = base64.urlsafe_b64decode(Fernet.generate_key())
            self.prv = Fernet(base64.urlsafe_b64encode(self.prv_bytes))
            self.prv_bytes = Fernet.generate_key()
            self.prv = Fernet(self.prv_bytes)

    def get_private_key(self):

@@ -348,7 +404,7 @@ class Destination:

        if self.type == Destination.GROUP:
            self.prv_bytes = key
            self.prv = Fernet(base64.urlsafe_b64encode(self.prv_bytes))
            self.prv = Fernet(self.prv_bytes)

    def load_public_key(self, key):
        if self.type != Destination.SINGLE:

@@ -373,7 +429,7 @@ class Destination:
        if self.type == Destination.GROUP:
            if hasattr(self, "prv") and self.prv != None:
                try:
                    return base64.urlsafe_b64decode(self.prv.encrypt(plaintext))
                    return self.prv.encrypt(plaintext)
                except Exception as e:
                    RNS.log("The GROUP destination could not encrypt data", RNS.LOG_ERROR)
                    RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)

@@ -398,7 +454,7 @@ class Destination:
        if self.type == Destination.GROUP:
            if hasattr(self, "prv") and self.prv != None:
                try:
                    return self.prv.decrypt(base64.urlsafe_b64encode(ciphertext))
                    return self.prv.decrypt(ciphertext)
                except Exception as e:
                    RNS.log("The GROUP destination could not decrypt data", RNS.LOG_ERROR)
                    RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)

RNS/Identity.py (229 changes)
@@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal

@@ -20,23 +20,18 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import base64
import math
import os
import RNS
import time
import atexit
import base64
from .vendor import umsgpack as umsgpack
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.fernet import Fernet
import hashlib

from .vendor import umsgpack as umsgpack

from RNS.Cryptography import X25519PrivateKey, X25519PublicKey, Ed25519PrivateKey, Ed25519PublicKey
from RNS.Cryptography import Fernet

cio_default_backend = default_backend()

class Identity:
    """

@@ -58,13 +53,12 @@ class Identity:
    """

    # Non-configurable constants
    FERNET_VERSION = 0x80
    FERNET_OVERHEAD = 57 # In bytes
    OPTIMISED_FERNET_OVERHEAD = 54 # In bytes
    FERNET_OVERHEAD = RNS.Cryptography.Fernet.FERNET_OVERHEAD
    AES128_BLOCKSIZE = 16 # In bytes
    HASHLENGTH = 256 # In bits
    SIGLENGTH = KEYSIZE # In bits

    NAME_HASH_LENGTH = 80
    TRUNCATED_HASHLENGTH = RNS.Reticulum.TRUNCATED_HASHLENGTH
    """
    Constant specifying the truncated hash length (in bits) used by Reticulum

@@ -97,6 +91,13 @@ class Identity:
            identity.app_data = identity_data[3]
            return identity
        else:
            for registered_destination in RNS.Transport.destinations:
                if destination_hash == registered_destination.hash:
                    identity = Identity(create_keys=False)
                    identity.load_public_key(registered_destination.identity.get_public_key())
                    identity.app_data = None
                    return identity

            return None

    @staticmethod

@@ -115,7 +116,26 @@ class Identity:

    @staticmethod
    def save_known_destinations():
        # TODO: Improve the storage method so we don't have to
        # deserialize and serialize the entire table on every
        # save, but only the changes. It might be possible to
        # simply overwrite on exit now that every local client
        # disconnect triggers a data persist.

        try:
            if hasattr(Identity, "saving_known_destinations"):
                wait_interval = 0.2
                wait_timeout = 5
                wait_start = time.time()
                while Identity.saving_known_destinations:
                    time.sleep(wait_interval)
                    if time.time() > wait_start+wait_timeout:
                        RNS.log("Could not save known destinations to storage, waiting for previous save operation timed out.", RNS.LOG_ERROR)
                        return False

            Identity.saving_known_destinations = True
            save_start = time.time()

            storage_known_destinations = {}
            if os.path.isfile(RNS.Reticulum.storagepath+"/known_destinations"):
                try:

@@ -125,27 +145,48 @@ class Identity:
                except:
                    pass

        for destination_hash in storage_known_destinations:
            if not destination_hash in Identity.known_destinations:
                Identity.known_destinations[destination_hash] = storage_known_destinations[destination_hash]
            try:
                for destination_hash in storage_known_destinations:
                    if not destination_hash in Identity.known_destinations:
                        Identity.known_destinations[destination_hash] = storage_known_destinations[destination_hash]
            except Exception as e:
                RNS.log("Skipped recombining known destinations from disk, since an error occurred: "+str(e), RNS.LOG_WARNING)

        RNS.log("Saving known destinations to storage...", RNS.LOG_VERBOSE)
            RNS.log("Saving "+str(len(Identity.known_destinations))+" known destinations to storage...", RNS.LOG_DEBUG)
            file = open(RNS.Reticulum.storagepath+"/known_destinations","wb")
            umsgpack.dump(Identity.known_destinations, file)
            file.close()
        RNS.log("Done saving known destinations to storage", RNS.LOG_VERBOSE)

            save_time = time.time() - save_start
            if save_time < 1:
                time_str = str(round(save_time*1000,2))+"ms"
            else:
                time_str = str(round(save_time,2))+"s"

            RNS.log("Saved known destinations to storage in "+time_str, RNS.LOG_DEBUG)

        except Exception as e:
            RNS.log("Error while saving known destinations to disk, the contained exception was: "+str(e), RNS.LOG_ERROR)
            RNS.trace_exception(e)

        Identity.saving_known_destinations = False

    @staticmethod
    def load_known_destinations():
        if os.path.isfile(RNS.Reticulum.storagepath+"/known_destinations"):
            try:
                file = open(RNS.Reticulum.storagepath+"/known_destinations","rb")
                Identity.known_destinations = umsgpack.load(file)
                loaded_known_destinations = umsgpack.load(file)
                file.close()

                Identity.known_destinations = {}
                for known_destination in loaded_known_destinations:
                    if len(known_destination) == RNS.Reticulum.TRUNCATED_HASHLENGTH//8:
                        Identity.known_destinations[known_destination] = loaded_known_destinations[known_destination]

                RNS.log("Loaded "+str(len(Identity.known_destinations))+" known destinations from storage", RNS.LOG_VERBOSE)
            except:

            except Exception as e:
                RNS.log("Error loading known destinations from disk, file will be recreated on exit", RNS.LOG_ERROR)
        else:
            RNS.log("Destinations file does not exist, no known destinations loaded", RNS.LOG_VERBOSE)

@@ -158,10 +199,7 @@ class Identity:
        :param data: Data to be hashed as *bytes*.
        :returns: SHA-256 hash as *bytes*
        """
        digest = hashes.Hash(hashes.SHA256(), backend=default_backend())
        digest.update(data)

        return digest.finalize()
        return RNS.Cryptography.sha256(data)

    @staticmethod
    def truncated_hash(data):

@@ -184,38 +222,73 @@ class Identity:
        return Identity.truncated_hash(os.urandom(Identity.TRUNCATED_HASHLENGTH//8))

    @staticmethod
    def validate_announce(packet):
    def validate_announce(packet, only_validate_signature=False):
        try:
            if packet.packet_type == RNS.Packet.ANNOUNCE:
                destination_hash = packet.destination_hash
                public_key = packet.data[:Identity.KEYSIZE//8]
                random_hash = packet.data[Identity.KEYSIZE//8:Identity.KEYSIZE//8+10]
                signature = packet.data[Identity.KEYSIZE//8+10:Identity.KEYSIZE//8+10+Identity.KEYSIZE//8]
                name_hash = packet.data[Identity.KEYSIZE//8:Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8]
                random_hash = packet.data[Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8:Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8+10]
                signature = packet.data[Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8+10:Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8+10+Identity.SIGLENGTH//8]
                app_data = b""
                if len(packet.data) > Identity.KEYSIZE//8+10+Identity.KEYSIZE//8:
                    app_data = packet.data[Identity.KEYSIZE//8+10+Identity.KEYSIZE//8:]
                if len(packet.data) > Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8+10+Identity.SIGLENGTH//8:
                    app_data = packet.data[Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8+10+Identity.SIGLENGTH//8:]

                signed_data = destination_hash+public_key+random_hash+app_data
                signed_data = destination_hash+public_key+name_hash+random_hash+app_data

                if not len(packet.data) > Identity.KEYSIZE//8+10+Identity.KEYSIZE//8:
                if not len(packet.data) > Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8+10+Identity.SIGLENGTH//8:
                    app_data = None

                announced_identity = Identity(create_keys=False)
                announced_identity.load_public_key(public_key)

                if announced_identity.pub != None and announced_identity.validate(signature, signed_data):
                    RNS.Identity.remember(packet.get_hash(), destination_hash, public_key, app_data)
                    del announced_identity
                    if only_validate_signature:
                        del announced_identity
                        return True

                    hash_material = name_hash+announced_identity.hash
                    expected_hash = RNS.Identity.full_hash(hash_material)[:RNS.Reticulum.TRUNCATED_HASHLENGTH//8]

                    if destination_hash == expected_hash:
                        # Check if we already have a public key for this destination
                        # and make sure the public key is not different.
                        if destination_hash in Identity.known_destinations:
                            if public_key != Identity.known_destinations[destination_hash][2]:
                                # In reality, this should never occur, but in the odd case
                                # that someone manages a hash collision, we reject the announce.
                                RNS.log("Received announce with valid signature and destination hash, but announced public key does not match already known public key.", RNS.LOG_CRITICAL)
                                RNS.log("This may indicate an attempt to modify network paths, or a random hash collision. The announce was rejected.", RNS.LOG_CRITICAL)
                                return False

                        RNS.Identity.remember(packet.get_hash(), destination_hash, public_key, app_data)
                        del announced_identity

                        if packet.rssi != None or packet.snr != None:
                            signal_str = " ["
                            if packet.rssi != None:
                                signal_str += "RSSI "+str(packet.rssi)+"dBm"
                                if packet.snr != None:
                                    signal_str += ", "
                            if packet.snr != None:
                                signal_str += "SNR "+str(packet.snr)+"dB"
                            signal_str += "]"
                        else:
                            signal_str = ""

                        if hasattr(packet, "transport_id") and packet.transport_id != None:
                            RNS.log("Valid announce for "+RNS.prettyhexrep(destination_hash)+" "+str(packet.hops)+" hops away, received via "+RNS.prettyhexrep(packet.transport_id)+" on "+str(packet.receiving_interface)+signal_str, RNS.LOG_EXTREME)
                        else:
                            RNS.log("Valid announce for "+RNS.prettyhexrep(destination_hash)+" "+str(packet.hops)+" hops away, received on "+str(packet.receiving_interface)+signal_str, RNS.LOG_EXTREME)

                        return True

                    if hasattr(packet, "transport_id") and packet.transport_id != None:
                        RNS.log("Valid announce for "+RNS.prettyhexrep(destination_hash)+" "+str(packet.hops)+" hops away, received via "+RNS.prettyhexrep(packet.transport_id)+" on "+str(packet.receiving_interface), RNS.LOG_EXTREME)
                    else:
                        RNS.log("Valid announce for "+RNS.prettyhexrep(destination_hash)+" "+str(packet.hops)+" hops away, received on "+str(packet.receiving_interface), RNS.LOG_EXTREME)

                    return True
                    RNS.log("Received invalid announce for "+RNS.prettyhexrep(destination_hash)+": Destination mismatch.", RNS.LOG_DEBUG)
                    return False

                else:
                    RNS.log("Received invalid announce for "+RNS.prettyhexrep(destination_hash), RNS.LOG_DEBUG)
                    RNS.log("Received invalid announce for "+RNS.prettyhexrep(destination_hash)+": Invalid signature.", RNS.LOG_DEBUG)
                    del announced_identity
                    return False

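The updated slicing in `validate_announce` implies this announce payload layout: public key, name hash, 10-byte random hash, signature, optional app data. A sketch of those offsets, assuming `KEYSIZE` is 512 bits (combined X25519 and Ed25519 public keys; this value is not shown in the diff) and `SIGLENGTH = KEYSIZE` as in the constants above:

```python
KEYSIZE = 512            # bits; assumed value of Identity.KEYSIZE
NAME_HASH_LENGTH = 80    # bits; Identity.NAME_HASH_LENGTH from the diff
SIGLENGTH = KEYSIZE      # bits; per "SIGLENGTH = KEYSIZE" above
RAND_LEN = 10            # bytes of random hash

def split_announce(data: bytes):
    k, n, s = KEYSIZE // 8, NAME_HASH_LENGTH // 8, SIGLENGTH // 8
    public_key  = data[:k]
    name_hash   = data[k:k + n]
    random_hash = data[k + n:k + n + RAND_LEN]
    signature   = data[k + n + RAND_LEN:k + n + RAND_LEN + s]
    app_data    = data[k + n + RAND_LEN + s:] or None   # optional trailing app data
    return public_key, name_hash, random_hash, signature, app_data

parts = split_announce(bytes(64) + bytes(10) + bytes(10) + bytes(64) + b"app")
assert [len(p) for p in parts[:4]] == [64, 10, 10, 64] and parts[4] == b"app"
```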
@ -223,9 +296,14 @@ class Identity:
|
|||
RNS.log("Error occurred while validating announce. The contained exception was: "+str(e), RNS.LOG_ERROR)
|
||||
return False
|
||||
|
||||
@staticmethod
|
||||
def persist_data():
|
||||
if not RNS.Transport.owner.is_connected_to_shared_instance:
|
||||
Identity.save_known_destinations()
|
||||
|
||||
@staticmethod
|
||||
def exit_handler():
|
||||
Identity.save_known_destinations()
|
||||
Identity.persist_data()
|
||||
|
||||
|
||||
@staticmethod
|
||||
|
@ -297,30 +375,16 @@ class Identity:
|
|||
|
||||
def create_keys(self):
|
||||
self.prv = X25519PrivateKey.generate()
|
||||
self.prv_bytes = self.prv.private_bytes(
|
||||
encoding=serialization.Encoding.Raw,
|
||||
format=serialization.PrivateFormat.Raw,
|
||||
encryption_algorithm=serialization.NoEncryption()
|
||||
)
|
||||
self.prv_bytes = self.prv.private_bytes()
|
||||
|
||||
self.sig_prv = Ed25519PrivateKey.generate()
|
||||
self.sig_prv_bytes = self.sig_prv.private_bytes(
|
||||
encoding=serialization.Encoding.Raw,
|
||||
format=serialization.PrivateFormat.Raw,
|
||||
encryption_algorithm=serialization.NoEncryption()
|
||||
)
|
||||
self.sig_prv_bytes = self.sig_prv.private_bytes()
|
||||
|
||||
self.pub = self.prv.public_key()
|
||||
self.pub_bytes = self.pub.public_bytes(
|
||||
encoding=serialization.Encoding.Raw,
|
||||
format=serialization.PublicFormat.Raw
|
||||
)
|
||||
self.pub_bytes = self.pub.public_bytes()
|
||||
|
||||
self.sig_pub = self.sig_prv.public_key()
|
||||
self.sig_pub_bytes = self.sig_pub.public_bytes(
|
||||
encoding=serialization.Encoding.Raw,
|
||||
format=serialization.PublicFormat.Raw
|
||||
)
|
||||
self.sig_pub_bytes = self.sig_pub.public_bytes()
|
||||
|
||||
self.update_hashes()
|
||||
|
||||
|
@ -352,16 +416,10 @@ class Identity:
|
|||
self.sig_prv = Ed25519PrivateKey.from_private_bytes(self.sig_prv_bytes)
|
||||
|
||||
self.pub = self.prv.public_key()
|
||||
self.pub_bytes = self.pub.public_bytes(
|
||||
encoding=serialization.Encoding.Raw,
|
||||
format=serialization.PublicFormat.Raw
|
||||
)
|
||||
self.pub_bytes = self.pub.public_bytes()
|
||||
|
||||
self.sig_pub = self.sig_prv.public_key()
|
||||
self.sig_pub_bytes = self.sig_pub.public_bytes(
|
||||
encoding=serialization.Encoding.Raw,
|
||||
format=serialization.PublicFormat.Raw
|
||||
)
|
||||
self.sig_pub_bytes = self.sig_pub.public_bytes()
|
||||
|
||||
self.update_hashes()
|
||||
|
||||
|
@@ -403,7 +461,7 @@ class Identity:
|
|||
return False
|
||||
except Exception as e:
|
||||
RNS.log("Error while loading identity from "+str(path), RNS.LOG_ERROR)
|
||||
RNS.log("The contained exception was: "+str(e))
|
||||
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
|
||||
|
||||
def get_salt(self):
|
||||
return self.hash
|
||||
|
@@ -421,24 +479,19 @@ class Identity:
|
|||
"""
|
||||
if self.pub != None:
|
||||
ephemeral_key = X25519PrivateKey.generate()
|
||||
ephemeral_pub_bytes = ephemeral_key.public_key().public_bytes(
|
||||
encoding=serialization.Encoding.Raw,
|
||||
format=serialization.PublicFormat.Raw
|
||||
)
|
||||
ephemeral_pub_bytes = ephemeral_key.public_key().public_bytes()
|
||||
|
||||
shared_key = ephemeral_key.exchange(self.pub)
|
||||
|
||||
# TODO: Improve this re-allocation of HKDF
|
||||
derived_key = HKDF(
|
||||
algorithm=hashes.SHA256(),
|
||||
derived_key = RNS.Cryptography.hkdf(
|
||||
length=32,
|
||||
derive_from=shared_key,
|
||||
salt=self.get_salt(),
|
||||
info=self.get_context(),
|
||||
backend=cio_default_backend,
|
||||
).derive(shared_key)
|
||||
context=self.get_context(),
|
||||
)
|
||||
|
||||
fernet = Fernet(base64.urlsafe_b64encode(derived_key))
|
||||
ciphertext = base64.urlsafe_b64decode(fernet.encrypt(plaintext))
|
||||
fernet = Fernet(derived_key)
|
||||
ciphertext = fernet.encrypt(plaintext)
|
||||
token = ephemeral_pub_bytes+ciphertext
|
||||
|
||||
return token
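The hunk above swaps `cryptography.io`'s HKDF for `RNS.Cryptography.hkdf`, keeping the same derivation: a 32-byte key is derived from the ECDH shared secret, salted with the identity hash. A minimal stdlib sketch of the RFC 5869 extract-and-expand construction — the parameter names `derive_from`, `salt`, and `context` mirror the call in the diff, but the internals below are an assumption based on RFC 5869, not a copy of the RNS implementation:

```python
import hashlib
import hmac

def hkdf(length, derive_from, salt=b"", context=b""):
    hash_len = hashlib.sha256().digest_size
    if not salt:
        # RFC 5869 default salt: a string of zero bytes, one hash length long
        salt = bytes(hash_len)

    # Extract step: PRK = HMAC-SHA256(salt, input keying material)
    prk = hmac.new(salt, derive_from, hashlib.sha256).digest()

    # Expand step: chain HMAC blocks until enough output is produced
    okm, block = b"", b""
    for counter in range(1, -(-length // hash_len) + 1):
        block = hmac.new(prk, block + context + bytes([counter]), hashlib.sha256).digest()
        okm += block

    return okm[:length]

derived_key = hkdf(32, b"shared secret", salt=b"identity-hash")
```

This reproduces the standard HKDF test vectors, which is a quick way to validate any drop-in replacement for the `cryptography.io` version.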
|
||||
|
@@ -463,18 +516,16 @@ class Identity:
|
|||
|
||||
shared_key = self.prv.exchange(peer_pub)
|
||||
|
||||
# TODO: Improve this re-allocation of HKDF
|
||||
derived_key = HKDF(
|
||||
algorithm=hashes.SHA256(),
|
||||
derived_key = RNS.Cryptography.hkdf(
|
||||
length=32,
|
||||
derive_from=shared_key,
|
||||
salt=self.get_salt(),
|
||||
info=self.get_context(),
|
||||
backend=cio_default_backend,
|
||||
).derive(shared_key)
|
||||
context=self.get_context(),
|
||||
)
|
||||
|
||||
fernet = Fernet(base64.urlsafe_b64encode(derived_key))
|
||||
fernet = Fernet(derived_key)
|
||||
ciphertext = ciphertext_token[Identity.KEYSIZE//8//2:]
|
||||
plaintext = fernet.decrypt(base64.urlsafe_b64encode(ciphertext))
|
||||
plaintext = fernet.decrypt(ciphertext)
|
||||
|
||||
except Exception as e:
|
||||
RNS.log("Decryption by "+RNS.prettyhexrep(self.hash)+" failed: "+str(e), RNS.LOG_DEBUG)
|
||||
|
|
|
@@ -77,8 +77,7 @@ class AX25KISSInterface(Interface):
|
|||
RNS.log("You can install one with the command: python3 -m pip install pyserial", RNS.LOG_CRITICAL)
|
||||
RNS.panic()
|
||||
|
||||
self.rxb = 0
|
||||
self.txb = 0
|
||||
super().__init__()
|
||||
|
||||
self.HW_MTU = 564
|
||||
|
||||
|
@@ -153,7 +152,7 @@ class AX25KISSInterface(Interface):
|
|||
# Allow time for interface to initialise before config
|
||||
sleep(2.0)
|
||||
thread = threading.Thread(target=self.readLoop)
|
||||
thread.setDaemon(True)
|
||||
thread.daemon = True
|
||||
thread.start()
|
||||
self.online = True
|
||||
RNS.log("Serial port "+self.port+" is now open")
|
||||
|
@@ -367,5 +366,8 @@ class AX25KISSInterface(Interface):
|
|||
|
||||
RNS.log("Reconnected serial port for "+str(self))
|
||||
|
||||
def should_ingress_limit(self):
|
||||
return False
|
||||
|
||||
def __str__(self):
|
||||
return "AX25KISSInterface["+self.name+"]"
|
|
@@ -0,0 +1,409 @@
|
|||
# MIT License
|
||||
#
|
||||
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
|
||||
#
|
||||
# Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
# of this software and associated documentation files (the "Software"), to deal
|
||||
# in the Software without restriction, including without limitation the rights
|
||||
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
# copies of the Software, and to permit persons to whom the Software is
|
||||
# furnished to do so, subject to the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be included in all
|
||||
# copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
# SOFTWARE.
|
||||
|
||||
from RNS.Interfaces.Interface import Interface
|
||||
from time import sleep
|
||||
import sys
|
||||
import threading
|
||||
import time
|
||||
import RNS
|
||||
|
||||
class KISS():
|
||||
FEND = 0xC0
|
||||
FESC = 0xDB
|
||||
TFEND = 0xDC
|
||||
TFESC = 0xDD
|
||||
CMD_UNKNOWN = 0xFE
|
||||
CMD_DATA = 0x00
|
||||
CMD_TXDELAY = 0x01
|
||||
CMD_P = 0x02
|
||||
CMD_SLOTTIME = 0x03
|
||||
CMD_TXTAIL = 0x04
|
||||
CMD_FULLDUPLEX = 0x05
|
||||
CMD_SETHARDWARE = 0x06
|
||||
CMD_READY = 0x0F
|
||||
CMD_RETURN = 0xFF
|
||||
|
||||
@staticmethod
|
||||
def escape(data):
|
||||
data = data.replace(bytes([0xdb]), bytes([0xdb, 0xdd]))
|
||||
data = data.replace(bytes([0xc0]), bytes([0xdb, 0xdc]))
|
||||
return data
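The replacement order in `KISS.escape` matters: FESC (0xDB) bytes must be substituted before FEND (0xC0) bytes, otherwise the 0xDB bytes introduced by the FEND substitution would themselves be escaped again. A standalone sketch demonstrating that order, with a hypothetical `unescape` helper for the round trip (the real interface unescapes statefully in its read loop):

```python
FEND, FESC, TFEND, TFESC = 0xC0, 0xDB, 0xDC, 0xDD

def escape(data):
    # Escape FESC first, then FEND, matching the order in KISS.escape above
    data = data.replace(bytes([FESC]), bytes([FESC, TFESC]))
    data = data.replace(bytes([FEND]), bytes([FESC, TFEND]))
    return data

def unescape(data):
    # Reverse the substitutions in the opposite order
    data = data.replace(bytes([FESC, TFEND]), bytes([FEND]))
    data = data.replace(bytes([FESC, TFESC]), bytes([FESC]))
    return data
```

Swapping the two `replace` calls in `escape` would corrupt any payload containing a raw 0xC0 byte, since its escape sequence contains a 0xDB that would then be re-escaped.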
|
||||
|
||||
class KISSInterface(Interface):
|
||||
MAX_CHUNK = 32768
|
||||
BITRATE_GUESS = 1200
|
||||
|
||||
owner = None
|
||||
port = None
|
||||
speed = None
|
||||
databits = None
|
||||
parity = None
|
||||
stopbits = None
|
||||
serial = None
|
||||
|
||||
def __init__(self, owner, name, port, speed, databits, parity, stopbits, preamble, txtail, persistence, slottime, flow_control, beacon_interval, beacon_data):
|
||||
import importlib
|
||||
if RNS.vendor.platformutils.is_android():
|
||||
self.on_android = True
|
||||
if importlib.util.find_spec('usbserial4a') != None:
|
||||
if importlib.util.find_spec('jnius') == None:
|
||||
RNS.log("Could not load jnius API wrapper for Android, KISS interface cannot be created.", RNS.LOG_CRITICAL)
|
||||
RNS.log("This probably means you are trying to use a USB-based interface from within Termux or similar.", RNS.LOG_CRITICAL)
|
||||
RNS.log("This is currently not possible, due to this environment limiting access to the native Android APIs.", RNS.LOG_CRITICAL)
|
||||
RNS.panic()
|
||||
|
||||
from usbserial4a import serial4a as serial
|
||||
self.parity = "N"
|
||||
|
||||
else:
|
||||
RNS.log("Could not load USB serial module for Android, KISS interface cannot be created.", RNS.LOG_CRITICAL)
|
||||
RNS.log("You can install this module by issuing: pip install usbserial4a", RNS.LOG_CRITICAL)
|
||||
RNS.panic()
|
||||
else:
|
||||
raise SystemError("Android-specific interface was used on non-Android OS")
|
||||
|
||||
super().__init__()
|
||||
|
||||
self.HW_MTU = 564
|
||||
|
||||
if beacon_data == None:
|
||||
beacon_data = ""
|
||||
|
||||
self.pyserial = serial
|
||||
self.serial = None
|
||||
self.owner = owner
|
||||
self.name = name
|
||||
self.port = port
|
||||
self.speed = speed
|
||||
self.databits = databits
|
||||
self.parity = "N"
|
||||
self.stopbits = stopbits
|
||||
self.timeout = 100
|
||||
self.online = False
|
||||
self.beacon_i = beacon_interval
|
||||
self.beacon_d = beacon_data.encode("utf-8")
|
||||
self.first_tx = None
|
||||
self.bitrate = KISSInterface.BITRATE_GUESS
|
||||
|
||||
self.packet_queue = []
|
||||
self.flow_control = flow_control
|
||||
self.interface_ready = False
|
||||
self.flow_control_timeout = 5
|
||||
self.flow_control_locked = time.time()
|
||||
|
||||
self.preamble = preamble if preamble != None else 350
|
||||
self.txtail = txtail if txtail != None else 20
|
||||
self.persistence = persistence if persistence != None else 64
|
||||
self.slottime = slottime if slottime != None else 20
|
||||
|
||||
if parity.lower() == "e" or parity.lower() == "even":
|
||||
self.parity = "E"
|
||||
|
||||
if parity.lower() == "o" or parity.lower() == "odd":
|
||||
self.parity = "O"
|
||||
|
||||
try:
|
||||
self.open_port()
|
||||
except Exception as e:
|
||||
RNS.log("Could not open serial port "+self.port, RNS.LOG_ERROR)
|
||||
raise e
|
||||
|
||||
if self.serial.is_open:
|
||||
self.configure_device()
|
||||
else:
|
||||
raise IOError("Could not open serial port")
|
||||
|
||||
|
||||
def open_port(self):
|
||||
RNS.log("Opening serial port "+self.port+"...")
|
||||
# Get device parameters
|
||||
from usb4a import usb
|
||||
device = usb.get_usb_device(self.port)
|
||||
if device:
|
||||
vid = device.getVendorId()
|
||||
pid = device.getProductId()
|
||||
|
||||
# Driver overrides for specific chips
|
||||
proxy = self.pyserial.get_serial_port
|
||||
if vid == 0x1A86 and pid == 0x55D4:
|
||||
# Force CDC driver for Qinheng CH34x
|
||||
RNS.log(str(self)+" using CDC driver for "+RNS.hexrep(vid)+":"+RNS.hexrep(pid), RNS.LOG_DEBUG)
|
||||
from usbserial4a.cdcacmserial4a import CdcAcmSerial
|
||||
proxy = CdcAcmSerial
|
||||
|
||||
self.serial = proxy(
|
||||
self.port,
|
||||
baudrate = self.speed,
|
||||
bytesize = self.databits,
|
||||
parity = self.parity,
|
||||
stopbits = self.stopbits,
|
||||
xonxoff = False,
|
||||
rtscts = False,
|
||||
timeout = None,
|
||||
inter_byte_timeout = None,
|
||||
# write_timeout = wtimeout,
|
||||
dsrdtr = False,
|
||||
)
|
||||
|
||||
if vid == 0x0403:
|
||||
# Hardware parameters for FTDI devices @ 115200 baud
|
||||
self.serial.DEFAULT_READ_BUFFER_SIZE = 16 * 1024
|
||||
self.serial.USB_READ_TIMEOUT_MILLIS = 100
|
||||
self.serial.timeout = 0.1
|
||||
elif vid == 0x10C4:
|
||||
# Hardware parameters for SiLabs CP210x @ 115200 baud
|
||||
self.serial.DEFAULT_READ_BUFFER_SIZE = 64
|
||||
self.serial.USB_READ_TIMEOUT_MILLIS = 12
|
||||
self.serial.timeout = 0.012
|
||||
elif vid == 0x1A86 and pid == 0x55D4:
|
||||
# Hardware parameters for Qinheng CH34x @ 115200 baud
|
||||
self.serial.DEFAULT_READ_BUFFER_SIZE = 64
|
||||
self.serial.USB_READ_TIMEOUT_MILLIS = 12
|
||||
self.serial.timeout = 0.1
|
||||
else:
|
||||
# Default values
|
||||
self.serial.DEFAULT_READ_BUFFER_SIZE = 1 * 1024
|
||||
self.serial.USB_READ_TIMEOUT_MILLIS = 100
|
||||
self.serial.timeout = 0.1
|
||||
|
||||
RNS.log(str(self)+" USB read buffer size set to "+RNS.prettysize(self.serial.DEFAULT_READ_BUFFER_SIZE), RNS.LOG_DEBUG)
|
||||
RNS.log(str(self)+" USB read timeout set to "+str(self.serial.USB_READ_TIMEOUT_MILLIS)+"ms", RNS.LOG_DEBUG)
|
||||
RNS.log(str(self)+" USB write timeout set to "+str(self.serial.USB_WRITE_TIMEOUT_MILLIS)+"ms", RNS.LOG_DEBUG)
|
||||
|
||||
def configure_device(self):
|
||||
# Allow time for interface to initialise before config
|
||||
sleep(2.0)
|
||||
thread = threading.Thread(target=self.readLoop)
|
||||
thread.daemon = True
|
||||
thread.start()
|
||||
self.online = True
|
||||
RNS.log("Serial port "+self.port+" is now open")
|
||||
RNS.log("Configuring KISS interface parameters...")
|
||||
self.setPreamble(self.preamble)
|
||||
self.setTxTail(self.txtail)
|
||||
self.setPersistence(self.persistence)
|
||||
self.setSlotTime(self.slottime)
|
||||
self.setFlowControl(self.flow_control)
|
||||
self.interface_ready = True
|
||||
RNS.log("KISS interface configured")
|
||||
|
||||
def setPreamble(self, preamble):
|
||||
preamble_ms = preamble
|
||||
preamble = int(preamble_ms / 10)
|
||||
if preamble < 0:
|
||||
preamble = 0
|
||||
if preamble > 255:
|
||||
preamble = 255
|
||||
|
||||
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_TXDELAY])+bytes([preamble])+bytes([KISS.FEND])
|
||||
written = self.serial.write(kiss_command)
|
||||
if written != len(kiss_command):
|
||||
raise IOError("Could not configure KISS interface preamble to "+str(preamble_ms)+" (command value "+str(preamble)+")")
|
||||
|
||||
def setTxTail(self, txtail):
|
||||
txtail_ms = txtail
|
||||
txtail = int(txtail_ms / 10)
|
||||
if txtail < 0:
|
||||
txtail = 0
|
||||
if txtail > 255:
|
||||
txtail = 255
|
||||
|
||||
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_TXTAIL])+bytes([txtail])+bytes([KISS.FEND])
|
||||
written = self.serial.write(kiss_command)
|
||||
if written != len(kiss_command):
|
||||
raise IOError("Could not configure KISS interface TX tail to "+str(txtail_ms)+" (command value "+str(txtail)+")")
|
||||
|
||||
def setPersistence(self, persistence):
|
||||
if persistence < 0:
|
||||
persistence = 0
|
||||
if persistence > 255:
|
||||
persistence = 255
|
||||
|
||||
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_P])+bytes([persistence])+bytes([KISS.FEND])
|
||||
written = self.serial.write(kiss_command)
|
||||
if written != len(kiss_command):
|
||||
raise IOError("Could not configure KISS interface persistence to "+str(persistence))
|
||||
|
||||
def setSlotTime(self, slottime):
|
||||
slottime_ms = slottime
|
||||
slottime = int(slottime_ms / 10)
|
||||
if slottime < 0:
|
||||
slottime = 0
|
||||
if slottime > 255:
|
||||
slottime = 255
|
||||
|
||||
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_SLOTTIME])+bytes([slottime])+bytes([KISS.FEND])
|
||||
written = self.serial.write(kiss_command)
|
||||
if written != len(kiss_command):
|
||||
raise IOError("Could not configure KISS interface slot time to "+str(slottime_ms)+" (command value "+str(slottime)+")")
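Each of the `set*` methods above wraps one configuration byte in a minimal KISS command frame — FEND, command byte, value, FEND — with millisecond parameters scaled down by a factor of 10 and clamped to a single byte. That framing can be sketched with one hypothetical helper (the name `kiss_command` is illustrative, not part of the interface):

```python
FEND = 0xC0
CMD_TXDELAY, CMD_P, CMD_SLOTTIME, CMD_TXTAIL = 0x01, 0x02, 0x03, 0x04

def kiss_command(command, value, scale=10):
    # Scale (e.g. ms -> 10 ms units) and clamp into one byte, as the set* methods do
    value = max(0, min(255, int(value / scale)))
    return bytes([FEND, command, value, FEND])
```

For example, a 20 ms slot time becomes the command value 2, and out-of-range values saturate at 255 rather than overflowing the single value byte.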
|
||||
|
||||
def setFlowControl(self, flow_control):
|
||||
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_READY])+bytes([0x01])+bytes([KISS.FEND])
|
||||
written = self.serial.write(kiss_command)
|
||||
if written != len(kiss_command):
|
||||
if (flow_control):
|
||||
raise IOError("Could not enable KISS interface flow control")
|
||||
else:
|
||||
raise IOError("Could not disable KISS interface flow control")
|
||||
|
||||
|
||||
def processIncoming(self, data):
|
||||
self.rxb += len(data)
|
||||
def af():
|
||||
self.owner.inbound(data, self)
|
||||
threading.Thread(target=af, daemon=True).start()
|
||||
|
||||
def processOutgoing(self,data):
|
||||
datalen = len(data)
|
||||
if self.online:
|
||||
if self.interface_ready:
|
||||
if self.flow_control:
|
||||
self.interface_ready = False
|
||||
self.flow_control_locked = time.time()
|
||||
|
||||
data = KISS.escape(data)
|
||||
frame = bytes([KISS.FEND])+bytes([0x00])+data+bytes([KISS.FEND])
|
||||
|
||||
written = self.serial.write(frame)
|
||||
self.txb += datalen
|
||||
|
||||
if data == self.beacon_d:
|
||||
self.first_tx = None
|
||||
else:
|
||||
if self.first_tx == None:
|
||||
self.first_tx = time.time()
|
||||
|
||||
if written != len(frame):
|
||||
raise IOError("Serial interface only wrote "+str(written)+" bytes of "+str(len(frame)))
|
||||
|
||||
else:
|
||||
self.queue(data)
|
||||
|
||||
def queue(self, data):
|
||||
self.packet_queue.append(data)
|
||||
|
||||
def process_queue(self):
|
||||
if len(self.packet_queue) > 0:
|
||||
data = self.packet_queue.pop(0)
|
||||
self.interface_ready = True
|
||||
self.processOutgoing(data)
|
||||
elif len(self.packet_queue) == 0:
|
||||
self.interface_ready = True
|
||||
|
||||
def readLoop(self):
|
||||
try:
|
||||
in_frame = False
|
||||
escape = False
|
||||
command = KISS.CMD_UNKNOWN
|
||||
data_buffer = b""
|
||||
last_read_ms = int(time.time()*1000)
|
||||
|
||||
while self.serial.is_open:
|
||||
serial_bytes = self.serial.read()
|
||||
got = len(serial_bytes)
|
||||
|
||||
for byte in serial_bytes:
|
||||
last_read_ms = int(time.time()*1000)
|
||||
|
||||
if (in_frame and byte == KISS.FEND and command == KISS.CMD_DATA):
|
||||
in_frame = False
|
||||
self.processIncoming(data_buffer)
|
||||
elif (byte == KISS.FEND):
|
||||
in_frame = True
|
||||
command = KISS.CMD_UNKNOWN
|
||||
data_buffer = b""
|
||||
elif (in_frame and len(data_buffer) < self.HW_MTU):
|
||||
if (len(data_buffer) == 0 and command == KISS.CMD_UNKNOWN):
|
||||
# We only support one HDLC port for now, so
|
||||
# strip off the port nibble
|
||||
byte = byte & 0x0F
|
||||
command = byte
|
||||
elif (command == KISS.CMD_DATA):
|
||||
if (byte == KISS.FESC):
|
||||
escape = True
|
||||
else:
|
||||
if (escape):
|
||||
if (byte == KISS.TFEND):
|
||||
byte = KISS.FEND
|
||||
if (byte == KISS.TFESC):
|
||||
byte = KISS.FESC
|
||||
escape = False
|
||||
data_buffer = data_buffer+bytes([byte])
|
||||
elif (command == KISS.CMD_READY):
|
||||
self.process_queue()
|
||||
|
||||
if got == 0:
|
||||
time_since_last = int(time.time()*1000) - last_read_ms
|
||||
if len(data_buffer) > 0 and time_since_last > self.timeout:
|
||||
data_buffer = b""
|
||||
in_frame = False
|
||||
command = KISS.CMD_UNKNOWN
|
||||
escape = False
|
||||
sleep(0.05)
|
||||
|
||||
if self.flow_control:
|
||||
if not self.interface_ready:
|
||||
if time.time() > self.flow_control_locked + self.flow_control_timeout:
|
||||
RNS.log("Interface "+str(self)+" is unlocking flow control due to time-out. This should not happen. Your hardware might have missed a flow-control READY command, or maybe it does not support flow-control.", RNS.LOG_WARNING)
|
||||
self.process_queue()
|
||||
|
||||
if self.beacon_i != None and self.beacon_d != None:
|
||||
if self.first_tx != None:
|
||||
if time.time() > self.first_tx + self.beacon_i:
|
||||
RNS.log("Interface "+str(self)+" is transmitting beacon data: "+str(self.beacon_d.decode("utf-8")), RNS.LOG_DEBUG)
|
||||
self.first_tx = None
|
||||
self.processOutgoing(self.beacon_d)
|
||||
|
||||
except Exception as e:
|
||||
self.online = False
|
||||
RNS.log("A serial port error occurred, the contained exception was: "+str(e), RNS.LOG_ERROR)
|
||||
RNS.log("The interface "+str(self)+" experienced an unrecoverable error and is now offline.", RNS.LOG_ERROR)
|
||||
|
||||
if RNS.Reticulum.panic_on_interface_error:
|
||||
RNS.panic()
|
||||
|
||||
RNS.log("Reticulum will attempt to reconnect the interface periodically.", RNS.LOG_ERROR)
|
||||
|
||||
self.online = False
|
||||
self.serial.close()
|
||||
self.reconnect_port()
|
||||
|
||||
def reconnect_port(self):
|
||||
while not self.online:
|
||||
try:
|
||||
time.sleep(5)
|
||||
RNS.log("Attempting to reconnect serial port "+str(self.port)+" for "+str(self)+"...", RNS.LOG_VERBOSE)
|
||||
self.open_port()
|
||||
if self.serial.is_open:
|
||||
self.configure_device()
|
||||
except Exception as e:
|
||||
RNS.log("Error while reconnecting port, the contained exception was: "+str(e), RNS.LOG_ERROR)
|
||||
|
||||
RNS.log("Reconnected serial port for "+str(self))
|
||||
|
||||
def should_ingress_limit(self):
|
||||
return False
|
||||
|
||||
def __str__(self):
|
||||
return "KISSInterface["+self.name+"]"
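The `readLoop` above is a byte-wise state machine. Stripped of the serial I/O, timeout, beacon, and flow-control handling, its frame-extraction logic reduces to roughly the following simplified sketch (only data frames on HDLC port 0 are kept, matching the port-nibble stripping in the diff; the function name is hypothetical):

```python
FEND, FESC, TFEND, TFESC = 0xC0, 0xDB, 0xDC, 0xDD
CMD_DATA, CMD_UNKNOWN = 0x00, 0xFE

def extract_frames(stream):
    # Walk the byte stream and collect the payload of each CMD_DATA frame
    frames = []
    in_frame, escape = False, False
    command, buffer = CMD_UNKNOWN, b""
    for byte in stream:
        if in_frame and byte == FEND and command == CMD_DATA:
            in_frame = False
            frames.append(buffer)
        elif byte == FEND:
            # Start of a frame; the next byte is the command
            in_frame, command, buffer = True, CMD_UNKNOWN, b""
        elif in_frame:
            if len(buffer) == 0 and command == CMD_UNKNOWN:
                command = byte & 0x0F   # strip off the port nibble
            elif command == CMD_DATA:
                if byte == FESC:
                    escape = True
                else:
                    if escape:
                        if byte == TFEND:
                            byte = FEND
                        if byte == TFESC:
                            byte = FESC
                        escape = False
                    buffer += bytes([byte])
    return frames
```

Frames with a non-data command byte are consumed without producing output, which mirrors how the real loop dispatches `CMD_READY` to `process_queue` instead of buffering it.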
|
|
@@ -0,0 +1,260 @@
|
|||
# MIT License
|
||||
#
|
||||
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
|
||||
#
|
||||
# Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
# of this software and associated documentation files (the "Software"), to deal
|
||||
# in the Software without restriction, including without limitation the rights
|
||||
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
# copies of the Software, and to permit persons to whom the Software is
|
||||
# furnished to do so, subject to the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be included in all
|
||||
# copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
# SOFTWARE.
|
||||
|
||||
from RNS.Interfaces.Interface import Interface
|
||||
from time import sleep
|
||||
import sys
|
||||
import threading
|
||||
import time
|
||||
import RNS
|
||||
|
||||
class HDLC():
|
||||
# The Serial Interface packetizes data using
|
||||
# simplified HDLC framing, similar to PPP
|
||||
FLAG = 0x7E
|
||||
ESC = 0x7D
|
||||
ESC_MASK = 0x20
|
||||
|
||||
@staticmethod
|
||||
def escape(data):
|
||||
data = data.replace(bytes([HDLC.ESC]), bytes([HDLC.ESC, HDLC.ESC^HDLC.ESC_MASK]))
|
||||
data = data.replace(bytes([HDLC.FLAG]), bytes([HDLC.ESC, HDLC.FLAG^HDLC.ESC_MASK]))
|
||||
return data
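HDLC escaping works differently from KISS: the offending byte is kept in the stream, XOR-masked with `ESC_MASK` (0x20), and preceded by an ESC byte. As with KISS, ESC must be escaped before FLAG so the inserted ESC bytes are not re-escaped. A standalone check of the construction:

```python
FLAG, ESC, ESC_MASK = 0x7E, 0x7D, 0x20

def escape(data):
    # ESC is escaped first; each escaped byte becomes ESC followed by byte ^ 0x20
    data = data.replace(bytes([ESC]), bytes([ESC, ESC ^ ESC_MASK]))
    data = data.replace(bytes([FLAG]), bytes([ESC, FLAG ^ ESC_MASK]))
    return data
```

On the receive side, `readLoop` below inverts this statefully: after seeing ESC it XORs nothing, but compares the next byte against `FLAG ^ ESC_MASK` and `ESC ^ ESC_MASK` to restore the original value.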
|
||||
|
||||
class SerialInterface(Interface):
|
||||
MAX_CHUNK = 32768
|
||||
|
||||
owner = None
|
||||
port = None
|
||||
speed = None
|
||||
databits = None
|
||||
parity = None
|
||||
stopbits = None
|
||||
serial = None
|
||||
|
||||
def __init__(self, owner, name, port, speed, databits, parity, stopbits):
|
||||
import importlib
|
||||
if RNS.vendor.platformutils.is_android():
|
||||
self.on_android = True
|
||||
if importlib.util.find_spec('usbserial4a') != None:
|
||||
if importlib.util.find_spec('jnius') == None:
|
||||
RNS.log("Could not load jnius API wrapper for Android, Serial interface cannot be created.", RNS.LOG_CRITICAL)
|
||||
RNS.log("This probably means you are trying to use a USB-based interface from within Termux or similar.", RNS.LOG_CRITICAL)
|
||||
RNS.log("This is currently not possible, due to this environment limiting access to the native Android APIs.", RNS.LOG_CRITICAL)
|
||||
RNS.panic()
|
||||
|
||||
from usbserial4a import serial4a as serial
|
||||
self.parity = "N"
|
||||
|
||||
else:
|
||||
RNS.log("Could not load USB serial module for Android, Serial interface cannot be created.", RNS.LOG_CRITICAL)
|
||||
RNS.log("You can install this module by issuing: pip install usbserial4a", RNS.LOG_CRITICAL)
|
||||
RNS.panic()
|
||||
else:
|
||||
raise SystemError("Android-specific interface was used on non-Android OS")
|
||||
|
||||
super().__init__()
|
||||
|
||||
self.HW_MTU = 564
|
||||
|
||||
self.pyserial = serial
|
||||
self.serial = None
|
||||
self.owner = owner
|
||||
self.name = name
|
||||
self.port = port
|
||||
self.speed = speed
|
||||
self.databits = databits
|
||||
self.parity = "N"
|
||||
self.stopbits = stopbits
|
||||
self.timeout = 100
|
||||
self.online = False
|
||||
self.bitrate = self.speed
|
||||
|
||||
if parity.lower() == "e" or parity.lower() == "even":
|
||||
self.parity = "E"
|
||||
|
||||
if parity.lower() == "o" or parity.lower() == "odd":
|
||||
self.parity = "O"
|
||||
|
||||
try:
|
||||
self.open_port()
|
||||
except Exception as e:
|
||||
RNS.log("Could not open serial port for interface "+str(self), RNS.LOG_ERROR)
|
||||
raise e
|
||||
|
||||
if self.serial.is_open:
|
||||
self.configure_device()
|
||||
else:
|
||||
raise IOError("Could not open serial port")
|
||||
|
||||
|
||||
def open_port(self):
|
||||
RNS.log("Opening serial port "+self.port+"...")
|
||||
# Get device parameters
|
||||
from usb4a import usb
|
||||
device = usb.get_usb_device(self.port)
|
||||
if device:
|
||||
vid = device.getVendorId()
|
||||
pid = device.getProductId()
|
||||
|
||||
# Driver overrides for specific chips
|
||||
proxy = self.pyserial.get_serial_port
|
||||
if vid == 0x1A86 and pid == 0x55D4:
|
||||
# Force CDC driver for Qinheng CH34x
|
||||
RNS.log(str(self)+" using CDC driver for "+RNS.hexrep(vid)+":"+RNS.hexrep(pid), RNS.LOG_DEBUG)
|
||||
from usbserial4a.cdcacmserial4a import CdcAcmSerial
|
||||
proxy = CdcAcmSerial
|
||||
|
||||
self.serial = proxy(
|
||||
self.port,
|
||||
baudrate = self.speed,
|
||||
bytesize = self.databits,
|
||||
parity = self.parity,
|
||||
stopbits = self.stopbits,
|
||||
xonxoff = False,
|
||||
rtscts = False,
|
||||
timeout = None,
|
||||
inter_byte_timeout = None,
|
||||
# write_timeout = wtimeout,
|
||||
dsrdtr = False,
|
||||
)
|
||||
|
||||
if vid == 0x0403:
|
||||
# Hardware parameters for FTDI devices @ 115200 baud
|
||||
self.serial.DEFAULT_READ_BUFFER_SIZE = 16 * 1024
|
||||
self.serial.USB_READ_TIMEOUT_MILLIS = 100
|
||||
self.serial.timeout = 0.1
|
||||
elif vid == 0x10C4:
|
||||
# Hardware parameters for SiLabs CP210x @ 115200 baud
|
||||
self.serial.DEFAULT_READ_BUFFER_SIZE = 64
|
||||
self.serial.USB_READ_TIMEOUT_MILLIS = 12
|
||||
self.serial.timeout = 0.012
|
||||
elif vid == 0x1A86 and pid == 0x55D4:
|
||||
# Hardware parameters for Qinheng CH34x @ 115200 baud
|
||||
self.serial.DEFAULT_READ_BUFFER_SIZE = 64
|
||||
self.serial.USB_READ_TIMEOUT_MILLIS = 12
|
||||
self.serial.timeout = 0.1
|
||||
else:
|
||||
# Default values
|
||||
self.serial.DEFAULT_READ_BUFFER_SIZE = 1 * 1024
|
||||
self.serial.USB_READ_TIMEOUT_MILLIS = 100
|
||||
self.serial.timeout = 0.1
|
||||
|
||||
RNS.log(str(self)+" USB read buffer size set to "+RNS.prettysize(self.serial.DEFAULT_READ_BUFFER_SIZE), RNS.LOG_DEBUG)
|
||||
RNS.log(str(self)+" USB read timeout set to "+str(self.serial.USB_READ_TIMEOUT_MILLIS)+"ms", RNS.LOG_DEBUG)
|
||||
RNS.log(str(self)+" USB write timeout set to "+str(self.serial.USB_WRITE_TIMEOUT_MILLIS)+"ms", RNS.LOG_DEBUG)
|
||||
|
||||
def configure_device(self):
|
||||
sleep(0.5)
|
||||
thread = threading.Thread(target=self.readLoop)
|
||||
thread.daemon = True
|
||||
thread.start()
|
||||
self.online = True
|
||||
RNS.log("Serial port "+self.port+" is now open", RNS.LOG_VERBOSE)
|
||||
|
||||
|
||||
def processIncoming(self, data):
|
||||
self.rxb += len(data)
|
||||
def af():
|
||||
self.owner.inbound(data, self)
|
||||
threading.Thread(target=af, daemon=True).start()
|
||||
|
||||
def processOutgoing(self,data):
|
||||
if self.online:
|
||||
data = bytes([HDLC.FLAG])+HDLC.escape(data)+bytes([HDLC.FLAG])
|
||||
written = self.serial.write(data)
|
||||
self.txb += len(data)
|
||||
if written != len(data):
|
||||
raise IOError("Serial interface only wrote "+str(written)+" bytes of "+str(len(data)))
|
||||
|
||||
def readLoop(self):
|
||||
try:
|
||||
in_frame = False
|
||||
escape = False
|
||||
data_buffer = b""
|
||||
last_read_ms = int(time.time()*1000)
|
||||
|
||||
while self.serial.is_open:
|
||||
serial_bytes = self.serial.read()
|
||||
got = len(serial_bytes)
|
||||
|
||||
for byte in serial_bytes:
|
||||
last_read_ms = int(time.time()*1000)
|
||||
|
||||
if (in_frame and byte == HDLC.FLAG):
|
||||
in_frame = False
|
||||
self.processIncoming(data_buffer)
|
||||
elif (byte == HDLC.FLAG):
|
||||
in_frame = True
|
||||
data_buffer = b""
|
||||
elif (in_frame and len(data_buffer) < self.HW_MTU):
|
||||
if (byte == HDLC.ESC):
|
||||
escape = True
|
||||
else:
|
||||
if (escape):
|
||||
if (byte == HDLC.FLAG ^ HDLC.ESC_MASK):
|
||||
byte = HDLC.FLAG
|
||||
if (byte == HDLC.ESC ^ HDLC.ESC_MASK):
|
||||
byte = HDLC.ESC
|
||||
escape = False
|
||||
data_buffer = data_buffer+bytes([byte])
|
||||
|
||||
if got == 0:
|
||||
time_since_last = int(time.time()*1000) - last_read_ms
|
||||
if len(data_buffer) > 0 and time_since_last > self.timeout:
|
||||
data_buffer = b""
|
||||
in_frame = False
|
||||
escape = False
|
||||
# sleep(0.08)
|
||||
|
||||
except Exception as e:
|
||||
self.online = False
|
||||
RNS.log("A serial port error occurred, the contained exception was: "+str(e), RNS.LOG_ERROR)
|
||||
RNS.log("The interface "+str(self)+" experienced an unrecoverable error and is now offline.", RNS.LOG_ERROR)
|
||||
|
||||
if RNS.Reticulum.panic_on_interface_error:
|
||||
RNS.panic()
|
||||
|
||||
RNS.log("Reticulum will attempt to reconnect the interface periodically.", RNS.LOG_ERROR)
|
||||
|
||||
self.online = False
|
||||
self.serial.close()
|
||||
self.reconnect_port()
|
||||
|
||||
def reconnect_port(self):
|
||||
while not self.online:
|
||||
try:
|
||||
time.sleep(5)
|
||||
RNS.log("Attempting to reconnect serial port "+str(self.port)+" for "+str(self)+"...", RNS.LOG_VERBOSE)
|
||||
self.open_port()
|
||||
if self.serial.is_open:
|
||||
self.configure_device()
|
||||
except Exception as e:
|
||||
RNS.log("Error while reconnecting port, the contained exception was: "+str(e), RNS.LOG_ERROR)
|
||||
|
||||
RNS.log("Reconnected serial port for "+str(self))
|
||||
|
||||
def should_ingress_limit(self):
|
||||
return False
|
||||
|
||||
def __str__(self):
|
||||
return "SerialInterface["+self.name+"]"
|
|
@@ -0,0 +1,27 @@
|
|||
# MIT License
|
||||
#
|
||||
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
|
||||
#
|
||||
# Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
# of this software and associated documentation files (the "Software"), to deal
|
||||
# in the Software without restriction, including without limitation the rights
|
||||
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
# copies of the Software, and to permit persons to whom the Software is
|
||||
# furnished to do so, subject to the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be included in all
|
||||
# copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
# SOFTWARE.
|
||||
|
||||
import os
|
||||
import glob
|
||||
|
||||
modules = glob.glob(os.path.dirname(__file__)+"/*.py")
|
||||
__all__ = [ os.path.basename(f)[:-3] for f in modules if not f.endswith('__init__.py')]
|
|
@@ -21,8 +21,10 @@
|
|||
# SOFTWARE.
|
||||
|
||||
from .Interface import Interface
|
||||
from collections import deque
|
||||
import socketserver
|
||||
import threading
|
||||
import re
|
||||
import socket
|
||||
import struct
|
||||
import time
|
||||
|
@@ -41,25 +43,53 @@ class AutoInterface(Interface):
|
|||
SCOPE_ORGANISATION = "8"
|
||||
SCOPE_GLOBAL = "e"
|
||||
|
||||
MULTICAST_PERMANENT_ADDRESS_TYPE = "0"
|
||||
MULTICAST_TEMPORARY_ADDRESS_TYPE = "1"
|
||||
|
||||
PEERING_TIMEOUT = 7.5
|
||||
|
||||
ALL_IGNORE_IFS = ["lo0"]
|
||||
DARWIN_IGNORE_IFS = ["awdl0", "llw0", "lo0", "en5"]
|
||||
ANDROID_IGNORE_IFS = ["dummy0", "lo", "tun0"]
|
||||
|
||||
BITRATE_GUESS = 10*1000*1000
|
||||
|
||||
def __init__(self, owner, name, group_id=None, discovery_scope=None, discovery_port=None, data_port=None, allowed_interfaces=None, ignored_interfaces=None, configured_bitrate=None):
|
||||
import importlib
|
||||
if importlib.util.find_spec('netifaces') != None:
|
||||
import netifaces
|
||||
else:
|
||||
RNS.log("Using AutoInterface requires the netifaces module.", RNS.LOG_CRITICAL)
|
||||
RNS.log("You can install it with the command: python3 -m pip install netifaces", RNS.LOG_CRITICAL)
|
||||
RNS.panic()
|
||||
MULTI_IF_DEQUE_LEN = 48
|
||||
MULTI_IF_DEQUE_TTL = 0.75
|
||||
|
||||
self.netifaces = netifaces
|
||||
self.rxb = 0
|
||||
self.txb = 0
|
||||
def handler_factory(self, callback):
|
||||
def create_handler(*args, **keys):
|
||||
return AutoInterfaceHandler(callback, *args, **keys)
|
||||
return create_handler
|
||||
|
||||
def descope_linklocal(self, link_local_addr):
|
||||
# Drop scope specifier expressed as %ifname (macOS)
|
||||
link_local_addr = link_local_addr.split("%")[0]
|
||||
# Drop embedded scope specifier (NetBSD, OpenBSD)
|
||||
link_local_addr = re.sub(r"fe80:[0-9a-f]*::","fe80::", link_local_addr)
|
||||
return link_local_addr
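The two normalisations in `descope_linklocal` can be exercised standalone. A sketch reproducing the method body as a plain function:

```python
import re

def descope_linklocal(link_local_addr):
    # Drop a scope specifier expressed as %ifname (macOS style)
    link_local_addr = link_local_addr.split("%")[0]
    # Drop an embedded scope identifier (NetBSD, OpenBSD style)
    return re.sub(r"fe80:[0-9a-f]*::", "fe80::", link_local_addr)
```

Both forms collapse to the bare `fe80::` link-local address, so peers discovered on platforms with different scope-identifier conventions compare equal.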
|
||||
|
||||
def list_interfaces(self):
|
||||
ifs = self.netinfo.interfaces()
|
||||
return ifs
|
||||
|
||||
def list_addresses(self, ifname):
|
||||
ifas = self.netinfo.ifaddresses(ifname)
|
||||
return ifas
|
||||
|
||||
def interface_name_to_index(self, ifname):
|
||||
|
||||
# socket.if_nametoindex doesn't work with uuid interface names on windows, it wants the ethernet_0 style
|
||||
# we will just get the index from netinfo instead as it seems to work
|
||||
if RNS.vendor.platformutils.is_windows():
|
||||
return self.netinfo.interface_names_to_indexes()[ifname]
|
||||
|
||||
return socket.if_nametoindex(ifname)
|
||||
|
||||
def __init__(self, owner, name, group_id=None, discovery_scope=None, discovery_port=None, multicast_address_type=None, data_port=None, allowed_interfaces=None, ignored_interfaces=None, configured_bitrate=None):
|
||||
from RNS.vendor.ifaddr import niwrapper
|
||||
super().__init__()
|
||||
self.netinfo = niwrapper
|
||||
|
||||
self.HW_MTU = 1064
|
||||
|
||||
|
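The `descope_linklocal` helper above strips both scope-specifier styles from an IPv6 link-local address. A standalone sketch of the same logic, with illustrative input addresses:

```python
import re

def descope_linklocal(link_local_addr):
    # Drop a scope specifier expressed as %ifname (macOS style)
    link_local_addr = link_local_addr.split("%")[0]
    # Drop an embedded scope identifier (NetBSD, OpenBSD style),
    # e.g. fe80:2:: becomes fe80::
    return re.sub(r"fe80:[0-9a-f]*::", "fe80::", link_local_addr)

print(descope_linklocal("fe80::1%en0"))                   # fe80::1
print(descope_linklocal("fe80:2::aede:48ff:fe00:1122"))   # fe80::aede:48ff:fe00:1122
```

Both forms normalize to the bare link-local address, so the same peer is recognized regardless of which OS reported it.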
@@ -70,16 +100,26 @@ class AutoInterface(Interface):
        self.peers = {}
        self.link_local_addresses = []
        self.adopted_interfaces = {}
        self.interface_servers = {}
        self.multicast_echoes = {}
        self.timed_out_interfaces = {}
        self.mif_deque = deque(maxlen=AutoInterface.MULTI_IF_DEQUE_LEN)
        self.mif_deque_times = deque(maxlen=AutoInterface.MULTI_IF_DEQUE_LEN)
        self.carrier_changed = False

        self.outbound_udp_socket = None

        self.announce_rate_target = None
        self.announce_interval = AutoInterface.PEERING_TIMEOUT/6.0
        self.peer_job_interval = AutoInterface.PEERING_TIMEOUT*1.1
        self.peering_timeout = AutoInterface.PEERING_TIMEOUT
        self.multicast_echo_timeout = AutoInterface.PEERING_TIMEOUT/2

        # Increase peering timeout on Android, due to potential
        # low-power modes implemented on many chipsets.
        if RNS.vendor.platformutils.is_android():
            self.peering_timeout *= 3

        if allowed_interfaces == None:
            self.allowed_interfaces = []
        else:
|
|||
else:
|
||||
self.discovery_port = discovery_port
|
||||
|
||||
if multicast_address_type == None:
|
||||
self.multicast_address_type = AutoInterface.MULTICAST_TEMPORARY_ADDRESS_TYPE
|
||||
elif str(multicast_address_type).lower() == "temporary":
|
||||
self.multicast_address_type = AutoInterface.MULTICAST_TEMPORARY_ADDRESS_TYPE
|
||||
elif str(multicast_address_type).lower() == "permanent":
|
||||
self.multicast_address_type = AutoInterface.MULTICAST_PERMANENT_ADDRESS_TYPE
|
||||
else:
|
||||
self.multicast_address_type = AutoInterface.MULTICAST_TEMPORARY_ADDRESS_TYPE
|
||||
|
||||
if data_port == None:
|
||||
self.data_port = AutoInterface.DEFAULT_DATA_PORT
|
||||
else:
|
||||
|
@ -128,105 +177,129 @@ class AutoInterface(Interface):
|
|||
gt += ":"+"{:02x}".format(g[9]+(g[8]<<8))
|
||||
gt += ":"+"{:02x}".format(g[11]+(g[10]<<8))
|
||||
gt += ":"+"{:02x}".format(g[13]+(g[12]<<8))
|
||||
self.mcast_discovery_address = "ff1"+self.discovery_scope+":"+gt
|
||||
self.mcast_discovery_address = "ff"+self.multicast_address_type+self.discovery_scope+":"+gt
|
||||
|
||||
suitable_interfaces = 0
|
||||
for ifname in self.netifaces.interfaces():
|
||||
if RNS.vendor.platformutils.is_darwin() and ifname in AutoInterface.DARWIN_IGNORE_IFS and not ifname in self.allowed_interfaces:
|
||||
RNS.log(str(self)+" skipping Darwin AWDL or tethering interface "+str(ifname), RNS.LOG_EXTREME)
|
||||
elif RNS.vendor.platformutils.is_darwin() and ifname == "lo0":
|
||||
RNS.log(str(self)+" skipping Darwin loopback interface "+str(ifname), RNS.LOG_EXTREME)
|
||||
elif RNS.vendor.platformutils.is_android() and ifname in AutoInterface.ANDROID_IGNORE_IFS and not ifname in self.allowed_interfaces:
|
||||
RNS.log(str(self)+" skipping Android system interface "+str(ifname), RNS.LOG_EXTREME)
|
||||
elif ifname in self.ignored_interfaces:
|
||||
RNS.log(str(self)+" ignoring disallowed interface "+str(ifname), RNS.LOG_EXTREME)
|
||||
else:
|
||||
if len(self.allowed_interfaces) > 0 and not ifname in self.allowed_interfaces:
|
||||
RNS.log(str(self)+" ignoring interface "+str(ifname)+" since it was not allowed", RNS.LOG_EXTREME)
|
||||
for ifname in self.list_interfaces():
|
||||
try:
|
||||
if RNS.vendor.platformutils.is_darwin() and ifname in AutoInterface.DARWIN_IGNORE_IFS and not ifname in self.allowed_interfaces:
|
||||
RNS.log(str(self)+" skipping Darwin AWDL or tethering interface "+str(ifname), RNS.LOG_EXTREME)
|
||||
elif RNS.vendor.platformutils.is_darwin() and ifname == "lo0":
|
||||
RNS.log(str(self)+" skipping Darwin loopback interface "+str(ifname), RNS.LOG_EXTREME)
|
||||
elif RNS.vendor.platformutils.is_android() and ifname in AutoInterface.ANDROID_IGNORE_IFS and not ifname in self.allowed_interfaces:
|
||||
RNS.log(str(self)+" skipping Android system interface "+str(ifname), RNS.LOG_EXTREME)
|
||||
elif ifname in self.ignored_interfaces:
|
||||
RNS.log(str(self)+" ignoring disallowed interface "+str(ifname), RNS.LOG_EXTREME)
|
||||
elif ifname in AutoInterface.ALL_IGNORE_IFS:
|
||||
RNS.log(str(self)+" skipping interface "+str(ifname), RNS.LOG_EXTREME)
|
||||
else:
|
||||
addresses = self.netifaces.ifaddresses(ifname)
|
||||
if self.netifaces.AF_INET6 in addresses:
|
||||
link_local_addr = None
|
||||
for address in addresses[self.netifaces.AF_INET6]:
|
||||
if "addr" in address:
|
||||
if address["addr"].startswith("fe80:"):
|
||||
link_local_addr = address["addr"]
|
||||
self.link_local_addresses.append(link_local_addr.split("%")[0])
|
||||
self.adopted_interfaces[ifname] = link_local_addr.split("%")[0]
|
||||
self.multicast_echoes[ifname] = time.time()
|
||||
RNS.log(str(self)+" Selecting link-local address "+str(link_local_addr)+" for interface "+str(ifname), RNS.LOG_EXTREME)
|
||||
if len(self.allowed_interfaces) > 0 and not ifname in self.allowed_interfaces:
|
||||
RNS.log(str(self)+" ignoring interface "+str(ifname)+" since it was not allowed", RNS.LOG_EXTREME)
|
||||
else:
|
||||
addresses = self.list_addresses(ifname)
|
||||
if self.netinfo.AF_INET6 in addresses:
|
||||
link_local_addr = None
|
||||
for address in addresses[self.netinfo.AF_INET6]:
|
||||
if "addr" in address:
|
||||
if address["addr"].startswith("fe80:"):
|
||||
link_local_addr = self.descope_linklocal(address["addr"])
|
||||
self.link_local_addresses.append(link_local_addr)
|
||||
self.adopted_interfaces[ifname] = link_local_addr
|
||||
self.multicast_echoes[ifname] = time.time()
|
||||
nice_name = self.netinfo.interface_name_to_nice_name(ifname)
|
||||
if nice_name != None and nice_name != ifname:
|
||||
RNS.log(f"{self} Selecting link-local address {link_local_addr} for interface {nice_name} / {ifname}", RNS.LOG_EXTREME)
|
||||
else:
|
||||
RNS.log(f"{self} Selecting link-local address {link_local_addr} for interface {ifname}", RNS.LOG_EXTREME)
|
||||
|
||||
if link_local_addr == None:
|
||||
RNS.log(str(self)+" No link-local IPv6 address configured for "+str(ifname)+", skipping interface", RNS.LOG_EXTREME)
|
||||
else:
|
||||
mcast_addr = self.mcast_discovery_address
|
||||
RNS.log(str(self)+" Creating multicast discovery listener on "+str(ifname)+" with address "+str(mcast_addr), RNS.LOG_EXTREME)
|
||||
if link_local_addr == None:
|
||||
RNS.log(str(self)+" No link-local IPv6 address configured for "+str(ifname)+", skipping interface", RNS.LOG_EXTREME)
|
||||
else:
|
||||
mcast_addr = self.mcast_discovery_address
|
||||
RNS.log(str(self)+" Creating multicast discovery listener on "+str(ifname)+" with address "+str(mcast_addr), RNS.LOG_EXTREME)
|
||||
|
||||
# Struct with interface index
|
||||
if_struct = struct.pack("I", socket.if_nametoindex(ifname))
|
||||
# Struct with interface index
|
||||
if_struct = struct.pack("I", self.interface_name_to_index(ifname))
|
||||
|
||||
# Set up multicast socket
|
||||
discovery_socket = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
|
||||
discovery_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
|
||||
discovery_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
|
||||
discovery_socket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_IF, if_struct)
|
||||
# Set up multicast socket
|
||||
discovery_socket = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
|
||||
discovery_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
|
||||
if hasattr(socket, "SO_REUSEPORT"):
|
||||
discovery_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
|
||||
discovery_socket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_IF, if_struct)
|
||||
|
||||
# Join multicast group
|
||||
mcast_group = socket.inet_pton(socket.AF_INET6, mcast_addr) + if_struct
|
||||
discovery_socket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mcast_group)
|
||||
# Join multicast group
|
||||
mcast_group = socket.inet_pton(socket.AF_INET6, mcast_addr) + if_struct
|
||||
discovery_socket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mcast_group)
|
||||
|
||||
# Bind socket
|
||||
addr_info = socket.getaddrinfo(mcast_addr+"%"+ifname, self.discovery_port, socket.AF_INET6, socket.SOCK_DGRAM)
|
||||
discovery_socket.bind(addr_info[0][4])
|
||||
# Bind socket
|
||||
if RNS.vendor.platformutils.is_windows():
|
||||
|
||||
# Set up thread for discovery packets
|
||||
def discovery_loop():
|
||||
self.discovery_handler(discovery_socket, ifname)
|
||||
# window throws "[WinError 10049] The requested address is not valid in its context"
|
||||
# when trying to use the multicast address as host, or when providing interface index
|
||||
# passing an empty host appears to work, but probably not exactly how we want it to...
|
||||
discovery_socket.bind(('', self.discovery_port))
|
||||
|
||||
thread = threading.Thread(target=discovery_loop)
|
||||
thread.setDaemon(True)
|
||||
thread.start()
|
||||
else:
|
||||
|
||||
suitable_interfaces += 1
|
||||
if self.discovery_scope == AutoInterface.SCOPE_LINK:
|
||||
addr_info = socket.getaddrinfo(mcast_addr+"%"+ifname, self.discovery_port, socket.AF_INET6, socket.SOCK_DGRAM)
|
||||
else:
|
||||
addr_info = socket.getaddrinfo(mcast_addr, self.discovery_port, socket.AF_INET6, socket.SOCK_DGRAM)
|
||||
|
||||
discovery_socket.bind(addr_info[0][4])
|
||||
|
||||
# Set up thread for discovery packets
|
||||
def discovery_loop():
|
||||
self.discovery_handler(discovery_socket, ifname)
|
||||
|
||||
thread = threading.Thread(target=discovery_loop)
|
||||
thread.daemon = True
|
||||
thread.start()
|
||||
|
||||
suitable_interfaces += 1
|
||||
|
||||
except Exception as e:
|
||||
nice_name = self.netinfo.interface_name_to_nice_name(ifname)
|
||||
if nice_name != None and nice_name != ifname:
|
||||
RNS.log(f"Could not configure the system interface {nice_name} / {ifname} for use with {self}, skipping it. The contained exception was: {e}", RNS.LOG_ERROR)
|
||||
else:
|
||||
RNS.log(f"Could not configure the system interface {ifname} for use with {self}, skipping it. The contained exception was: {e}", RNS.LOG_ERROR)
|
||||
|
||||
if suitable_interfaces == 0:
|
||||
RNS.log(str(self)+" could not autoconfigure. This interface currently provides no connectivity.", RNS.LOG_WARNING)
|
||||
else:
|
||||
self.receives = True
|
||||
|
||||
if configured_bitrate != None:
|
||||
self.bitrate = configured_bitrate
|
||||
else:
|
||||
self.bitrate = AutoInterface.BITRATE_GUESS
|
||||
|
||||
peering_wait = self.announce_interval*1.2
|
||||
RNS.log(str(self)+" discovering peers for "+str(round(peering_wait, 2))+" seconds...", RNS.LOG_VERBOSE)
|
||||
|
||||
def handlerFactory(callback):
|
||||
def createHandler(*args, **keys):
|
||||
return AutoInterfaceHandler(callback, *args, **keys)
|
||||
return createHandler
|
||||
|
||||
self.owner = owner
|
||||
socketserver.UDPServer.address_family = socket.AF_INET6
|
||||
|
||||
for ifname in self.adopted_interfaces:
|
||||
local_addr = self.adopted_interfaces[ifname]+"%"+ifname
|
||||
local_addr = self.adopted_interfaces[ifname]+"%"+str(self.interface_name_to_index(ifname))
|
||||
addr_info = socket.getaddrinfo(local_addr, self.data_port, socket.AF_INET6, socket.SOCK_DGRAM)
|
||||
address = addr_info[0][4]
|
||||
|
||||
self.server = socketserver.UDPServer(address, handlerFactory(self.processIncoming))
|
||||
udp_server = socketserver.UDPServer(address, self.handler_factory(self.processIncoming))
|
||||
self.interface_servers[ifname] = udp_server
|
||||
|
||||
thread = threading.Thread(target=self.server.serve_forever)
|
||||
thread.setDaemon(True)
|
||||
thread = threading.Thread(target=udp_server.serve_forever)
|
||||
thread.daemon = True
|
||||
thread.start()
|
||||
|
||||
job_thread = threading.Thread(target=self.peer_jobs)
|
||||
job_thread.setDaemon(True)
|
||||
job_thread.daemon = True
|
||||
job_thread.start()
|
||||
|
||||
time.sleep(peering_wait)
|
||||
|
||||
if configured_bitrate != None:
|
||||
self.bitrate = configured_bitrate
|
||||
else:
|
||||
self.bitrate = AutoInterface.BITRATE_GUESS
|
||||
|
||||
self.online = True
|
||||
|
||||
|
||||
|
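The discovery address in the hunk above is formed as `ff` + address-type nibble + scope nibble + a group token hashed from the group id. A sketch of the shape of that construction; the exact byte positions AutoInterface reads from the hash differ, so the token derivation here is illustrative only:

```python
import hashlib

def mcast_discovery_address(group_id, address_type, scope):
    # Group token derived from a hash of the group id (illustrative
    # byte positions; the real code uses specific offsets of the hash)
    g = hashlib.sha256(group_id.encode("utf-8")).digest()
    gt = "0"
    for i in (2, 4, 6, 8, 10, 12):
        gt += ":" + "{:02x}".format(g[i+1] + (g[i] << 8))
    # ff<type><scope>:: prefix: type "0" = permanent, "1" = temporary;
    # scope "2" = link-local, "8" = organisation, "e" = global
    return "ff" + address_type + scope + ":" + gt

addr = mcast_discovery_address("reticulum", "1", "2")
print(addr)  # a valid 8-group IPv6 multicast address starting with ff12:0:
```

Using a temporary address type (`ff1…`) means routers treat the group as transient, which matches the default `multicast_address_type` above.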
@@ -235,7 +308,7 @@ class AutoInterface(Interface):
                    self.announce_handler(ifname)

                thread = threading.Thread(target=announce_loop)
                thread.setDaemon(True)
                thread.daemon = True
                thread.start()

            while True:
@@ -265,16 +338,62 @@ class AutoInterface(Interface):
                RNS.log(str(self)+" removed peer "+str(peer_addr)+" on "+str(removed_peer[0]), RNS.LOG_DEBUG)

        for ifname in self.adopted_interfaces:
            # Check that the link-local address has not changed
            try:
                addresses = self.list_addresses(ifname)
                if self.netinfo.AF_INET6 in addresses:
                    link_local_addr = None
                    for address in addresses[self.netinfo.AF_INET6]:
                        if "addr" in address:
                            if address["addr"].startswith("fe80:"):
                                link_local_addr = self.descope_linklocal(address["addr"])
                                if link_local_addr != self.adopted_interfaces[ifname]:
                                    old_link_local_address = self.adopted_interfaces[ifname]
                                    RNS.log("Replacing link-local address "+str(old_link_local_address)+" for "+str(ifname)+" with "+str(link_local_addr), RNS.LOG_DEBUG)
                                    self.adopted_interfaces[ifname] = link_local_addr
                                    self.link_local_addresses.append(link_local_addr)

                                    if old_link_local_address in self.link_local_addresses:
                                        self.link_local_addresses.remove(old_link_local_address)

                                    local_addr = link_local_addr+"%"+ifname
                                    addr_info = socket.getaddrinfo(local_addr, self.data_port, socket.AF_INET6, socket.SOCK_DGRAM)
                                    listen_address = addr_info[0][4]

                                    if ifname in self.interface_servers:
                                        RNS.log("Shutting down previous UDP listener for "+str(self)+" "+str(ifname), RNS.LOG_DEBUG)
                                        previous_server = self.interface_servers[ifname]
                                        def shutdown_server():
                                            previous_server.shutdown()
                                        threading.Thread(target=shutdown_server, daemon=True).start()

                                    RNS.log("Starting new UDP listener for "+str(self)+" "+str(ifname), RNS.LOG_DEBUG)

                                    udp_server = socketserver.UDPServer(listen_address, self.handler_factory(self.processIncoming))
                                    self.interface_servers[ifname] = udp_server

                                    thread = threading.Thread(target=udp_server.serve_forever)
                                    thread.daemon = True
                                    thread.start()

                                    self.carrier_changed = True

            except Exception as e:
                RNS.log("Could not get device information while updating link-local addresses for "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)

            # Check multicast echo timeouts
            last_multicast_echo = 0
            if ifname in self.multicast_echoes:
                last_multicast_echo = self.multicast_echoes[ifname]

            if now - last_multicast_echo > self.multicast_echo_timeout:
                if ifname in self.timed_out_interfaces and self.timed_out_interfaces[ifname] == False:
                    self.carrier_changed = True
                    RNS.log("Multicast echo timeout for "+str(ifname)+". Carrier lost.", RNS.LOG_WARNING)
                self.timed_out_interfaces[ifname] = True
            else:
                if ifname in self.timed_out_interfaces and self.timed_out_interfaces[ifname] == True:
                    self.carrier_changed = True
                    RNS.log(str(self)+" Carrier recovered on "+str(ifname), RNS.LOG_WARNING)
                self.timed_out_interfaces[ifname] = False
@@ -291,9 +410,11 @@ class AutoInterface(Interface):
            announce_socket = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
            addr_info = socket.getaddrinfo(self.mcast_discovery_address, self.discovery_port, socket.AF_INET6, socket.SOCK_DGRAM)

            ifis = struct.pack("I", socket.if_nametoindex(ifname))
            ifis = struct.pack("I", self.interface_name_to_index(ifname))
            announce_socket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_IF, ifis)
            announce_socket.sendto(discovery_token, addr_info[0][4])
            announce_socket.close()

        except Exception as e:
            if (ifname in self.timed_out_interfaces and self.timed_out_interfaces[ifname] == False) or not ifname in self.timed_out_interfaces:
                RNS.log(str(self)+" Detected possible carrier loss on "+str(ifname)+": "+str(e), RNS.LOG_WARNING)
@@ -323,18 +444,30 @@ class AutoInterface(Interface):
            self.peers[addr][1] = time.time()

    def processIncoming(self, data):
        self.rxb += len(data)
        self.owner.inbound(data, self)
        data_hash = RNS.Identity.full_hash(data)
        deque_hit = False
        if data_hash in self.mif_deque:
            for te in self.mif_deque_times:
                if te[0] == data_hash and time.time() < te[1]+AutoInterface.MULTI_IF_DEQUE_TTL:
                    deque_hit = True
                    break

        if not deque_hit:
            self.mif_deque.append(data_hash)
            self.mif_deque_times.append([data_hash, time.time()])
            self.rxb += len(data)
            self.owner.inbound(data, self)

    def processOutgoing(self,data):
        for peer in self.peers:
            try:
                if self.outbound_udp_socket == None:
                    self.outbound_udp_socket = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)

                peer_addr = str(peer)+"%"+str(self.peers[peer][0])
                peer_addr = str(peer)+"%"+str(self.interface_name_to_index(self.peers[peer][0]))
                addr_info = socket.getaddrinfo(peer_addr, self.data_port, socket.AF_INET6, socket.SOCK_DGRAM)
                self.outbound_udp_socket.sendto(data, addr_info[0][4])

            except Exception as e:
                RNS.log("Could not transmit on "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
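The new `processIncoming` drops copies of the same datagram that arrive on several adopted interfaces within a short TTL window. A self-contained sketch of that dedup pattern, using SHA-256 in place of `RNS.Identity.full_hash`:

```python
import time
import hashlib
from collections import deque

MULTI_IF_DEQUE_LEN = 48
MULTI_IF_DEQUE_TTL = 0.75

mif_deque = deque(maxlen=MULTI_IF_DEQUE_LEN)
mif_deque_times = deque(maxlen=MULTI_IF_DEQUE_LEN)
accepted = []

def process_incoming(data):
    # Hash the datagram and suppress it if the same hash was seen
    # on any interface within the TTL window
    data_hash = hashlib.sha256(data).digest()
    deque_hit = False
    if data_hash in mif_deque:
        for te in mif_deque_times:
            if te[0] == data_hash and time.time() < te[1] + MULTI_IF_DEQUE_TTL:
                deque_hit = True
                break

    if not deque_hit:
        mif_deque.append(data_hash)
        mif_deque_times.append([data_hash, time.time()])
        accepted.append(data)

process_incoming(b"packet")
process_incoming(b"packet")   # duplicate within TTL, suppressed
print(len(accepted))          # 1
```

Because both deques are bounded, stale entries age out automatically once enough newer packets have been seen, keeping memory use constant.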
@@ -342,6 +475,11 @@ class AutoInterface(Interface):
            self.txb += len(data)


    # Until per-device sub-interfacing is implemented,
    # ingress limiting should be disabled on AutoInterface
    def should_ingress_limit(self):
        return False

    def __str__(self):
        return "AutoInterface["+self.name+"]"
@@ -107,8 +107,15 @@ class I2PController:

    def stop(self):
        for task in asyncio.Task.all_tasks(loop=self.loop):
            task.cancel()
        for i2ptunnel in self.i2plib_tunnels:
            if hasattr(i2ptunnel, "stop") and callable(i2ptunnel.stop):
                i2ptunnel.stop()

        if hasattr(asyncio.Task, "all_tasks") and callable(asyncio.Task.all_tasks):
            for task in asyncio.Task.all_tasks(loop=self.loop):
                task.cancel()

        time.sleep(0.2)

        self.loop.stop()
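The `hasattr` guard added to `stop()` matters because `asyncio.Task.all_tasks()` was deprecated in Python 3.7 and removed in 3.9. A version-tolerant sketch of the same cancellation step, falling back to the module-level `asyncio.all_tasks()` on newer interpreters:

```python
import asyncio

def cancel_all_tasks(loop):
    # Old interpreters expose asyncio.Task.all_tasks(); on Python 3.9+
    # it is gone, and the module-level asyncio.all_tasks() is used instead.
    if hasattr(asyncio.Task, "all_tasks") and callable(asyncio.Task.all_tasks):
        tasks = asyncio.Task.all_tasks(loop=loop)
    else:
        tasks = asyncio.all_tasks(loop=loop)
    for task in tasks:
        task.cancel()

# Demo: schedule a long sleep, then cancel everything from outside the loop
loop = asyncio.new_event_loop()
task = loop.create_task(asyncio.sleep(10))
cancel_all_tasks(loop)
try:
    loop.run_until_complete(task)
except asyncio.CancelledError:
    pass
print(task.cancelled())
loop.close()
```

This mirrors how `I2PController.stop()` runs: it is called from outside the event loop's thread, so the loop reference is passed in explicitly rather than relying on `get_running_loop()`.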
@@ -117,6 +124,10 @@ class I2PController:
        return self.i2plib.utils.get_free_port()


    def stop_tunnel(self, i2ptunnel):
        if hasattr(i2ptunnel, "stop") and callable(i2ptunnel.stop):
            i2ptunnel.stop()

    def client_tunnel(self, owner, i2p_destination):
        self.client_tunnels[i2p_destination] = False
        self.i2plib_tunnels[i2p_destination] = None
@@ -146,11 +157,10 @@ class I2PController:
                    RNS.log("Got status from I2P control process", RNS.LOG_EXTREME)

                    if tn.status["setup_failed"]:
                        self.stop_tunnel(tn)
                        raise tn.status["exception"]

                    else:
                        self.client_tunnels[i2p_destination] = True
                        owner.awaiting_i2p_tunnel = False
                        if owner.socket != None:
                            if hasattr(owner.socket, "close"):
                                if callable(owner.socket.close):
@@ -163,6 +173,8 @@ class I2PController:
                                        owner.socket.close()
                                    except Exception as e:
                                        RNS.log("Error while closing socket for "+str(owner)+": "+str(e))
                        self.client_tunnels[i2p_destination] = True
                        owner.awaiting_i2p_tunnel = False

                        RNS.log(str(owner)+" tunnel setup complete", RNS.LOG_VERBOSE)
@@ -182,27 +194,62 @@ class I2PController:
            else:
                i2ptunnel = self.i2plib_tunnels[i2p_destination]
                if hasattr(i2ptunnel, "status"):
                    # TODO: Remove
                    # RNS.log(str(i2ptunnel.status))
                    i2p_exception = i2ptunnel.status["exception"]

                    if i2ptunnel.status["setup_ran"] == False:
                        RNS.log(str(self)+" I2P tunnel setup did not complete", RNS.LOG_ERROR)

                        self.stop_tunnel(i2ptunnel)
                        return False

                    elif i2p_exception != None:
                        RNS.log(str(self)+" An error occurred while setting up I2P tunnel. The contained exception was: "+str(i2p_exception), RNS.LOG_ERROR)
                        RNS.log("Resetting I2P tunnel", RNS.LOG_ERROR)
                        RNS.log("An error occurred while setting up I2P tunnel to "+str(i2p_destination), RNS.LOG_ERROR)

                        if isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.CantReachPeer):
                            RNS.log("The I2P daemon can't reach peer "+str(i2p_destination), RNS.LOG_ERROR)

                        elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.DuplicatedDest):
                            RNS.log("The I2P daemon reported that the destination is already in use", RNS.LOG_ERROR)

                        elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.DuplicatedId):
                            RNS.log("The I2P daemon reported that the ID is already in use", RNS.LOG_ERROR)

                        elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.InvalidId):
                            RNS.log("The I2P daemon reported that the stream session ID doesn't exist", RNS.LOG_ERROR)

                        elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.InvalidKey):
                            RNS.log("The I2P daemon reported that the key for "+str(i2p_destination)+" is invalid", RNS.LOG_ERROR)

                        elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.KeyNotFound):
                            RNS.log("The I2P daemon could not find the key for "+str(i2p_destination), RNS.LOG_ERROR)

                        elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.PeerNotFound):
                            RNS.log("The I2P daemon could not find the peer "+str(i2p_destination), RNS.LOG_ERROR)

                        elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.I2PError):
                            RNS.log("The I2P daemon experienced an unspecified error", RNS.LOG_ERROR)

                        elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.Timeout):
                            RNS.log("I2P daemon timed out while setting up client tunnel to "+str(i2p_destination), RNS.LOG_ERROR)

                        RNS.log("Resetting I2P tunnel and retrying later", RNS.LOG_ERROR)

                        self.stop_tunnel(i2ptunnel)
                        return False

                    elif i2ptunnel.status["setup_failed"] == True:
                        RNS.log(str(self)+" Unspecified I2P tunnel setup error, resetting I2P tunnel", RNS.LOG_ERROR)

                        self.stop_tunnel(i2ptunnel)
                        return False

                else:
                    RNS.log(str(self)+" Got no status from SAM API, resetting I2P tunnel", RNS.LOG_ERROR)

                    self.stop_tunnel(i2ptunnel)
                    return False

            # Wait for status from I2P control process
            time.sleep(5)
@@ -251,6 +298,7 @@ class I2PController:
                tunnel = self.i2plib.ServerTunnel((owner.bind_ip, owner.bind_port), loop=self.loop, destination=i2p_dest, sam_address=self.sam_address)
                self.i2plib_tunnels[i2p_b32] = tunnel
                await tunnel.run()
                owner.online = True
                RNS.log(str(owner)+" endpoint setup complete. Now reachable at: "+str(i2p_dest.base32)+".b32.i2p", RNS.LOG_VERBOSE)

            asyncio.run_coroutine_threadsafe(tunnel_up(), self.loop).result()
@@ -260,28 +308,61 @@ class I2PController:
                raise e

        else:

            i2ptunnel = self.i2plib_tunnels[i2p_b32]
            if hasattr(i2ptunnel, "status"):
                # TODO: Remove
                # RNS.log(str(i2ptunnel.status))
                i2p_exception = i2ptunnel.status["exception"]

                if i2ptunnel.status["setup_ran"] == False:
                    RNS.log(str(self)+" I2P tunnel setup did not complete", RNS.LOG_ERROR)

                    self.stop_tunnel(i2ptunnel)
                    return False

                elif i2p_exception != None:
                    RNS.log(str(self)+" An error occurred while setting up I2P tunnel. The contained exception was: "+str(i2p_exception), RNS.LOG_ERROR)
                    RNS.log("Resetting I2P tunnel", RNS.LOG_ERROR)
                    RNS.log("An error occurred while setting up I2P tunnel", RNS.LOG_ERROR)

                    if isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.CantReachPeer):
                        RNS.log("The I2P daemon can't reach peer "+str(i2p_destination), RNS.LOG_ERROR)

                    elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.DuplicatedDest):
                        RNS.log("The I2P daemon reported that the destination is already in use", RNS.LOG_ERROR)

                    elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.DuplicatedId):
                        RNS.log("The I2P daemon reported that the ID is already in use", RNS.LOG_ERROR)

                    elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.InvalidId):
                        RNS.log("The I2P daemon reported that the stream session ID doesn't exist", RNS.LOG_ERROR)

                    elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.InvalidKey):
                        RNS.log("The I2P daemon reported that the key for "+str(i2p_destination)+" is invalid", RNS.LOG_ERROR)

                    elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.KeyNotFound):
                        RNS.log("The I2P daemon could not find the key for "+str(i2p_destination), RNS.LOG_ERROR)

                    elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.PeerNotFound):
                        RNS.log("The I2P daemon could not find the peer "+str(i2p_destination), RNS.LOG_ERROR)

                    elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.I2PError):
                        RNS.log("The I2P daemon experienced an unspecified error", RNS.LOG_ERROR)

                    elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.Timeout):
                        RNS.log("I2P daemon timed out while setting up client tunnel to "+str(i2p_destination), RNS.LOG_ERROR)

                    RNS.log("Resetting I2P tunnel and retrying later", RNS.LOG_ERROR)

                    self.stop_tunnel(i2ptunnel)
                    return False

                elif i2ptunnel.status["setup_failed"] == True:
                    RNS.log(str(self)+" Unspecified I2P tunnel setup error, resetting I2P tunnel", RNS.LOG_ERROR)

                    self.stop_tunnel(i2ptunnel)
                    return False

            else:
                RNS.log(str(self)+" Got no status from SAM API, resetting I2P tunnel", RNS.LOG_ERROR)

                self.stop_tunnel(i2ptunnel)
                return False

            time.sleep(5)
@@ -298,14 +379,18 @@ class I2PInterfacePeer(Interface):
    RECONNECT_MAX_TRIES = None

    # TCP socket options
    I2P_USER_TIMEOUT = 40
    I2P_USER_TIMEOUT = 45
    I2P_PROBE_AFTER = 10
    I2P_PROBE_INTERVAL = 5
    I2P_PROBES = 6
    I2P_PROBE_INTERVAL = 9
    I2P_PROBES = 5
    I2P_READ_TIMEOUT = (I2P_PROBE_INTERVAL * I2P_PROBES + I2P_PROBE_AFTER)*2

    TUNNEL_STATE_INIT = 0x00
    TUNNEL_STATE_ACTIVE = 0x01
    TUNNEL_STATE_STALE = 0x02

    def __init__(self, parent_interface, owner, name, target_i2p_dest=None, connected_socket=None, max_reconnect_tries=None):
        self.rxb = 0
        self.txb = 0
        super().__init__()

        self.HW_MTU = 1064
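The probe constants in the hunk above feed both the kernel keepalive options and the derived read timeout. A standalone sketch mirroring those values, with Linux-specific option names guarded by `hasattr` for portability:

```python
import socket

# Constants mirrored from the hunk above (new values)
I2P_USER_TIMEOUT = 45
I2P_PROBE_AFTER = 10
I2P_PROBE_INTERVAL = 9
I2P_PROBES = 5
I2P_READ_TIMEOUT = (I2P_PROBE_INTERVAL * I2P_PROBES + I2P_PROBE_AFTER) * 2

def set_i2p_timeouts(sock):
    # Abort the connection after USER_TIMEOUT seconds without ACKs (Linux-only option)
    if hasattr(socket, "TCP_USER_TIMEOUT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, I2P_USER_TIMEOUT * 1000)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Probe after PROBE_AFTER seconds idle, every PROBE_INTERVAL seconds,
    # up to PROBES times before the kernel declares the peer dead
    for opt, val in (("TCP_KEEPIDLE", I2P_PROBE_AFTER),
                     ("TCP_KEEPINTVL", I2P_PROBE_INTERVAL),
                     ("TCP_KEEPCNT", I2P_PROBES)):
        if hasattr(socket, opt):
            sock.setsockopt(socket.IPPROTO_TCP, getattr(socket, opt), val)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
set_i2p_timeouts(s)
print(I2P_READ_TIMEOUT)  # (9*5 + 10)*2 = 110 seconds
s.close()
```

The read timeout is deliberately twice the full keepalive cycle, so the application-level watchdog only fires well after the kernel has had a chance to detect the dead connection itself.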
@@ -328,6 +413,30 @@ class I2PInterfacePeer(Interface):
        self.i2p_tunnel_ready = False
        self.mode = RNS.Interfaces.Interface.Interface.MODE_FULL
        self.bitrate = I2PInterface.BITRATE_GUESS
        self.last_read = 0
        self.last_write = 0
        self.wd_reset = False
        self.i2p_tunnel_state = I2PInterfacePeer.TUNNEL_STATE_INIT

        self.ifac_size = self.parent_interface.ifac_size
        self.ifac_netname = self.parent_interface.ifac_netname
        self.ifac_netkey = self.parent_interface.ifac_netkey
        if self.ifac_netname != None or self.ifac_netkey != None:
            ifac_origin = b""
            if self.ifac_netname != None:
                ifac_origin += RNS.Identity.full_hash(self.ifac_netname.encode("utf-8"))
            if self.ifac_netkey != None:
                ifac_origin += RNS.Identity.full_hash(self.ifac_netkey.encode("utf-8"))

            ifac_origin_hash = RNS.Identity.full_hash(ifac_origin)
            self.ifac_key = RNS.Cryptography.hkdf(
                length=64,
                derive_from=ifac_origin_hash,
                salt=RNS.Reticulum.IFAC_SALT,
                context=None
            )
            self.ifac_identity = RNS.Identity.from_bytes(self.ifac_key)
            self.ifac_signature = self.ifac_identity.sign(RNS.Identity.full_hash(self.ifac_key))

        self.announce_rate_target = None
        self.announce_rate_grace = None
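The IFAC key derivation above hashes the network name and key into an origin value, then expands it with HKDF. A self-contained sketch of the same shape using only stdlib primitives; the salt, inputs, and hash choice here are illustrative, not RNS's actual constants:

```python
import hashlib
import hmac

def hkdf(length, ikm, salt, info=b""):
    # RFC 5869 extract-and-expand, HMAC-SHA256
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, t, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

# Origin: concatenated hashes of the (hypothetical) network name and key
ifac_origin = hashlib.sha256(b"netname").digest() + hashlib.sha256(b"netkey").digest()
ifac_origin_hash = hashlib.sha256(ifac_origin).digest()

# Expand to a 64-byte interface access key
ifac_key = hkdf(64, ifac_origin_hash, salt=b"illustrative-salt")
print(len(ifac_key))  # 64
```

Deriving the key this way means any two interfaces configured with the same name/key pair compute the same IFAC key independently, without ever transmitting it.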
@@ -373,46 +482,40 @@ class I2PInterfacePeer(Interface):
                    RNS.log("Error while configuring "+str(self)+": "+str(e), RNS.LOG_ERROR)
                    RNS.log("Check that I2P is installed and running, and that SAM is enabled. Retrying tunnel setup later.", RNS.LOG_ERROR)

                    time.sleep(15)
                    time.sleep(8)

            thread = threading.Thread(target=tunnel_job)
            thread.setDaemon(True)
            thread.daemon = True
            thread.start()

            def wait_job():
                while self.awaiting_i2p_tunnel:
                    time.sleep(0.25)
                time.sleep(2)

                if not self.kiss_framing:
                    self.wants_tunnel = True

                if not self.connect(initial=True):
                    thread = threading.Thread(target=self.reconnect)
                    thread.setDaemon(True)
                    thread.daemon = True
                    thread.start()
                else:
                    thread = threading.Thread(target=self.read_loop)
                    thread.setDaemon(True)
                    thread.daemon = True
                    thread.start()

            thread = threading.Thread(target=wait_job)
            thread.setDaemon(True)
            thread.daemon = True
            thread.start()


    def set_timeouts_linux(self):
        if not self.i2p_tunneled:
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, int(I2PInterfacePeer.TCP_USER_TIMEOUT * 1000))
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, int(I2PInterfacePeer.TCP_PROBE_AFTER))
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, int(I2PInterfacePeer.TCP_PROBE_INTERVAL))
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, int(I2PInterfacePeer.TCP_PROBES))
        else:
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, int(I2PInterfacePeer.I2P_USER_TIMEOUT * 1000))
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, int(I2PInterfacePeer.I2P_PROBE_AFTER))
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, int(I2PInterfacePeer.I2P_PROBE_INTERVAL))
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, int(I2PInterfacePeer.I2P_PROBES))
        self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, int(I2PInterfacePeer.I2P_USER_TIMEOUT * 1000))
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, int(I2PInterfacePeer.I2P_PROBE_AFTER))
        self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, int(I2PInterfacePeer.I2P_PROBE_INTERVAL))
        self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, int(I2PInterfacePeer.I2P_PROBES))

    def set_timeouts_osx(self):
        if hasattr(socket, "TCP_KEEPALIVE"):
@ -421,17 +524,27 @@ class I2PInterfacePeer(Interface):
|
|||
TCP_KEEPIDLE = 0x10
|
||||
|
||||
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
|
||||
|
||||
if not self.i2p_tunneled:
|
||||
self.socket.setsockopt(socket.IPPROTO_TCP, TCP_KEEPIDLE, int(I2PInterfacePeer.TCP_PROBE_AFTER))
|
||||
else:
|
||||
self.socket.setsockopt(socket.IPPROTO_TCP, TCP_KEEPIDLE, int(I2PInterfacePeer.I2P_PROBE_AFTER))
|
||||
|
||||
self.socket.setsockopt(socket.IPPROTO_TCP, TCP_KEEPIDLE, int(I2PInterfacePeer.I2P_PROBE_AFTER))
|
||||
|
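The set_timeouts methods above enable TCP keepalive probes on the underlying socket; a standalone sketch of the same pattern (the idle/interval/probe values here are illustrative, not RNS defaults, and the TCP_* options are Linux-specific, hence the hasattr guards):

```python
import socket

def enable_keepalive(sock, idle=300, interval=60, probes=3):
    # Enable TCP keepalive on an open socket. After `idle` seconds of
    # silence, the kernel sends up to `probes` probes, `interval` seconds
    # apart, before declaring the peer dead.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
    return sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0
```

The I2P branch above uses much longer timeouts than the plain-TCP branch because I2P tunnels routinely pause for tens of seconds without the link actually being dead.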
    def shutdown_socket(self, target_socket):
        if callable(target_socket.close):
            try:
                if socket != None:
                    target_socket.shutdown(socket.SHUT_RDWR)
            except Exception as e:
                RNS.log("Error while shutting down socket for "+str(self)+": "+str(e))

            try:
                if socket != None:
                    target_socket.close()
            except Exception as e:
                RNS.log("Error while closing socket for "+str(self)+": "+str(e))

    def detach(self):
        RNS.log("Detaching "+str(self), RNS.LOG_DEBUG)
        if self.socket != None:
            if hasattr(self.socket, "close"):
                if callable(self.socket.close):
                    RNS.log("Detaching "+str(self), RNS.LOG_DEBUG)
                    self.detached = True

                    try:

@@ -477,7 +590,6 @@ class I2PInterfacePeer(Interface):

        return True

    def reconnect(self):
        if self.initiator:
            if not self.reconnecting:

@@ -506,7 +618,7 @@ class I2PInterfacePeer(Interface):

                self.reconnecting = False
                thread = threading.Thread(target=self.read_loop)
                thread.setDaemon(True)
                thread.daemon = True
                thread.start()
                if not self.kiss_framing:
                    RNS.Transport.synthesize_tunnel(self)

@@ -525,7 +637,7 @@ class I2PInterfacePeer(Interface):

    def processOutgoing(self, data):
        if self.online:
            while self.writing:
                time.sleep(0.01)
                time.sleep(0.001)

            try:
                self.writing = True

@@ -538,6 +650,8 @@ class I2PInterfacePeer(Interface):

                self.socket.sendall(data)
                self.writing = False
                self.txb += len(data)
                self.last_write = time.time()

                if hasattr(self, "parent_interface") and self.parent_interface != None and self.parent_count:
                    self.parent_interface.txb += len(data)

@@ -547,8 +661,59 @@ class I2PInterfacePeer(Interface):

                self.teardown()

    def read_watchdog(self):
        while self.wd_reset:
            time.sleep(0.25)

        should_run = True
        try:
            while should_run and not self.wd_reset:
                time.sleep(1)

                if (time.time()-self.last_read > I2PInterfacePeer.I2P_PROBE_AFTER*2):
                    if self.i2p_tunnel_state != I2PInterfacePeer.TUNNEL_STATE_STALE:
                        RNS.log("I2P tunnel became unresponsive", RNS.LOG_DEBUG)

                    self.i2p_tunnel_state = I2PInterfacePeer.TUNNEL_STATE_STALE
                else:
                    self.i2p_tunnel_state = I2PInterfacePeer.TUNNEL_STATE_ACTIVE

                if (time.time()-self.last_write > I2PInterfacePeer.I2P_PROBE_AFTER*1):
                    try:
                        if self.socket != None:
                            self.socket.sendall(bytes([HDLC.FLAG, HDLC.FLAG]))
                    except Exception as e:
                        RNS.log("An error occurred while sending I2P keepalive. The contained exception was: "+str(e), RNS.LOG_ERROR)
                        self.shutdown_socket(self.socket)
                        should_run = False

                if (time.time()-self.last_read > I2PInterfacePeer.I2P_READ_TIMEOUT):
                    RNS.log("I2P socket is unresponsive, restarting...", RNS.LOG_WARNING)
                    if self.socket != None:
                        try:
                            self.socket.shutdown(socket.SHUT_RDWR)
                        except Exception as e:
                            RNS.log("Error while shutting down socket for "+str(self)+": "+str(e))

                        try:
                            self.socket.close()
                        except Exception as e:
                            RNS.log("Error while closing socket for "+str(self)+": "+str(e))

                    should_run = False

                self.wd_reset = False

        finally:
            self.wd_reset = False
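The read_watchdog above is an instance of a common stall-detection pattern: a daemon thread that compares "time since last read" against a timeout and tears the link down when it is exceeded. A minimal standalone sketch of that pattern (class and parameter names are hypothetical, not RNS API):

```python
import threading
import time

class StallWatchdog:
    # A daemon thread periodically checks how long ago feed() was last
    # called and invokes a callback once the link has been quiet for
    # longer than `timeout` seconds, then stops itself.
    def __init__(self, timeout, on_stall):
        self.timeout = timeout
        self.on_stall = on_stall
        self.last_read = time.time()
        self.running = False

    def feed(self):
        # Called by the read loop whenever data arrives.
        self.last_read = time.time()

    def _run(self, interval):
        while self.running:
            time.sleep(interval)
            if time.time() - self.last_read > self.timeout:
                self.running = False
                self.on_stall()

    def start(self, interval=1.0):
        self.running = True
        threading.Thread(target=self._run, args=(interval,), daemon=True).start()
```

The real method also sends keepalive FLAG bytes before giving up; the `wd_reset` flag in the original serves to pause the watchdog across reconnect attempts.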
    def read_loop(self):
        try:
            self.last_read = time.time()
            self.last_write = time.time()

            wd_thread = threading.Thread(target=self.read_watchdog, daemon=True).start()

            in_frame = False
            escape = False
            data_buffer = b""

@@ -558,6 +723,7 @@ class I2PInterfacePeer(Interface):

                data_in = self.socket.recv(4096)
                if len(data_in) > 0:
                    pointer = 0
                    self.last_read = time.time()
                    while pointer < len(data_in):
                        byte = data_in[pointer]
                        pointer += 1

@@ -610,6 +776,11 @@ class I2PInterfacePeer(Interface):

                            data_buffer = data_buffer+bytes([byte])
                else:
                    self.online = False

                    self.wd_reset = True
                    time.sleep(2)
                    self.wd_reset = False

                    if self.initiator and not self.detached:
                        RNS.log("Socket for "+str(self)+" was closed, attempting to reconnect...", RNS.LOG_WARNING)
                        self.reconnect()
@@ -659,9 +830,8 @@ class I2PInterfacePeer(Interface):

class I2PInterface(Interface):
    BITRATE_GUESS = 256*1000

    def __init__(self, owner, name, rns_storagepath, peers, connectable = False):
        self.rxb = 0
        self.txb = 0
    def __init__(self, owner, name, rns_storagepath, peers, connectable = False, ifac_size = 16, ifac_netname = None, ifac_netkey = None):
        super().__init__()

        self.HW_MTU = 1064

@@ -685,11 +855,14 @@ class I2PInterface(Interface):

        self.bind_port = self.i2p.get_free_port()
        self.address = (self.bind_ip, self.bind_port)
        self.bitrate = I2PInterface.BITRATE_GUESS
        self.ifac_size = ifac_size
        self.ifac_netname = ifac_netname
        self.ifac_netkey = ifac_netkey

        self.online = False

        i2p_thread = threading.Thread(target=self.i2p.start)
        i2p_thread.setDaemon(True)
        i2p_thread.daemon = True
        i2p_thread.start()

        i2p_notready_warning = False

@@ -714,7 +887,7 @@ class I2PInterface(Interface):

        self.server = ThreadingI2PServer(self.address, handlerFactory(self.incoming_connection))

        thread = threading.Thread(target=self.server.serve_forever)
        thread.setDaemon(True)
        thread.daemon = True
        thread.start()

        if self.connectable:

@@ -733,7 +906,7 @@ class I2PInterface(Interface):

        thread = threading.Thread(target=tunnel_job)
        thread.setDaemon(True)
        thread.daemon = True
        thread.start()

        if peers != None:
@@ -755,9 +928,27 @@ class I2PInterface(Interface):

        spawned_interface.parent_interface = self
        spawned_interface.online = True
        spawned_interface.bitrate = self.bitrate

        spawned_interface.ifac_size = self.ifac_size
        spawned_interface.ifac_netname = self.ifac_netname
        spawned_interface.ifac_netkey = self.ifac_netkey
        if spawned_interface.ifac_netname != None or spawned_interface.ifac_netkey != None:
            ifac_origin = b""
            if spawned_interface.ifac_netname != None:
                ifac_origin += RNS.Identity.full_hash(spawned_interface.ifac_netname.encode("utf-8"))
            if spawned_interface.ifac_netkey != None:
                ifac_origin += RNS.Identity.full_hash(spawned_interface.ifac_netkey.encode("utf-8"))

            ifac_origin_hash = RNS.Identity.full_hash(ifac_origin)
            spawned_interface.ifac_key = RNS.Cryptography.hkdf(
                length=64,
                derive_from=ifac_origin_hash,
                salt=RNS.Reticulum.IFAC_SALT,
                context=None
            )
            spawned_interface.ifac_identity = RNS.Identity.from_bytes(spawned_interface.ifac_key)
            spawned_interface.ifac_signature = spawned_interface.ifac_identity.sign(RNS.Identity.full_hash(spawned_interface.ifac_key))

        spawned_interface.announce_rate_target = self.announce_rate_target
        spawned_interface.announce_rate_grace = self.announce_rate_grace
        spawned_interface.announce_rate_penalty = self.announce_rate_penalty
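The IFAC key setup above hashes the configured network name and/or key, concatenates the digests, hashes again, and expands the result with HKDF into a 64-byte interface key. A self-contained sketch of that derivation scheme, assuming RFC 5869 HKDF over SHA-256 (the `ifac_salt` default below is a placeholder, not the real `RNS.Reticulum.IFAC_SALT` constant):

```python
import hashlib
import hmac

def hkdf(length, derive_from, salt=b"", context=b""):
    # RFC 5869 HKDF with SHA-256: extract a PRK, then expand it to
    # `length` output bytes.
    hash_len = 32
    prk = hmac.new(salt if salt else bytes(hash_len), derive_from, hashlib.sha256).digest()
    okm, block = b"", b""
    for counter in range(1, -(-length // hash_len) + 1):
        block = hmac.new(prk, block + context + bytes([counter]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

def derive_ifac_key(netname=None, netkey=None, ifac_salt=b"example-ifac-salt"):
    # Mirror of the structure above: hash each configured component,
    # concatenate, hash the concatenation, then HKDF-expand to 64 bytes.
    # `ifac_salt` is hypothetical; Reticulum uses its own fixed salt.
    origin = b""
    if netname is not None:
        origin += hashlib.sha256(netname.encode("utf-8")).digest()
    if netkey is not None:
        origin += hashlib.sha256(netkey.encode("utf-8")).digest()
    origin_hash = hashlib.sha256(origin).digest()
    return hkdf(64, origin_hash, salt=ifac_salt)
```

The derivation is deterministic, so every node configured with the same network name and key arrives at the same interface key without any exchange.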
@@ -771,7 +962,14 @@ class I2PInterface(Interface):

    def processOutgoing(self, data):
        pass

    def received_announce(self, from_spawned=False):
        if from_spawned: self.ia_freq_deque.append(time.time())

    def sent_announce(self, from_spawned=False):
        if from_spawned: self.oa_freq_deque.append(time.time())

    def detach(self):
        RNS.log("Detaching "+str(self), RNS.LOG_DEBUG)
        self.i2p.stop()

    def __str__(self):

@@ -1,6 +1,6 @@

# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal

@@ -23,6 +23,7 @@

import RNS
import time
import threading
from collections import deque

class Interface:
    IN = False

@@ -39,18 +40,161 @@ class Interface:

    MODE_BOUNDARY = 0x05
    MODE_GATEWAY = 0x06

    # Which interface modes a Transport Node
    # should actively discover paths for.
    DISCOVER_PATHS_FOR = [MODE_ACCESS_POINT, MODE_GATEWAY]
    # Which interface modes a Transport Node should
    # actively discover paths for.
    DISCOVER_PATHS_FOR = [MODE_ACCESS_POINT, MODE_GATEWAY, MODE_ROAMING]

    # How many samples to use for announce
    # frequency calculations
    IA_FREQ_SAMPLES = 6
    OA_FREQ_SAMPLES = 6

    # Maximum amount of ingress limited announces
    # to hold at any given time.
    MAX_HELD_ANNOUNCES = 256

    # How long a spawned interface will be
    # considered to be newly created. Two
    # hours by default.
    IC_NEW_TIME = 2*60*60
    IC_BURST_FREQ_NEW = 3.5
    IC_BURST_FREQ = 12
    IC_BURST_HOLD = 1*60
    IC_BURST_PENALTY = 5*60
    IC_HELD_RELEASE_INTERVAL = 30
    def __init__(self):
        self.rxb = 0
        self.txb = 0
        self.created = time.time()
        self.online = False
        self.bitrate = 1e6

        self.ingress_control = True
        self.ic_max_held_announces = Interface.MAX_HELD_ANNOUNCES
        self.ic_burst_hold = Interface.IC_BURST_HOLD
        self.ic_burst_active = False
        self.ic_burst_activated = 0
        self.ic_held_release = 0
        self.ic_burst_freq_new = Interface.IC_BURST_FREQ_NEW
        self.ic_burst_freq = Interface.IC_BURST_FREQ
        self.ic_new_time = Interface.IC_NEW_TIME
        self.ic_burst_penalty = Interface.IC_BURST_PENALTY
        self.ic_held_release_interval = Interface.IC_HELD_RELEASE_INTERVAL
        self.held_announces = {}

        self.ia_freq_deque = deque(maxlen=Interface.IA_FREQ_SAMPLES)
        self.oa_freq_deque = deque(maxlen=Interface.OA_FREQ_SAMPLES)

    def get_hash(self):
        return RNS.Identity.full_hash(str(self).encode("utf-8"))
    # This is a generic function for determining when an interface
    # should activate ingress limiting. Since this can vary for
    # different interface types, this function should be overwritten
    # in case a particular interface requires a different approach.
    def should_ingress_limit(self):
        if self.ingress_control:
            freq_threshold = self.ic_burst_freq_new if self.age() < self.ic_new_time else self.ic_burst_freq
            ia_freq = self.incoming_announce_frequency()

            if self.ic_burst_active:
                if ia_freq < freq_threshold and time.time() > self.ic_burst_activated+self.ic_burst_hold:
                    self.ic_burst_active = False
                    self.ic_held_release = time.time() + self.ic_burst_penalty
                return True

            else:
                if ia_freq > freq_threshold:
                    self.ic_burst_active = True
                    self.ic_burst_activated = time.time()
                    return True

                else:
                    return False

        else:
            return False

    def age(self):
        return time.time()-self.created
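The burst logic in should_ingress_limit is a hysteresis: limiting switches on as soon as the incoming announce rate exceeds the threshold, but switches off only after the rate has dropped back below it AND a minimum hold period has elapsed, preventing rapid on/off flapping. A standalone sketch of just that decision (class name hypothetical; the penalty/held-release bookkeeping of the original is omitted):

```python
import time

class BurstLimiter:
    # Hysteresis: activate above `threshold`, deactivate only once the
    # rate is below the threshold and `hold` seconds have passed since
    # activation. Deactivation still limits the current call, matching
    # the structure of should_ingress_limit above.
    def __init__(self, threshold, hold):
        self.threshold = threshold
        self.hold = hold
        self.active = False
        self.activated = 0

    def should_limit(self, rate, now=None):
        now = time.time() if now is None else now
        if self.active:
            if rate < self.threshold and now > self.activated + self.hold:
                self.active = False
            return True
        elif rate > self.threshold:
            self.active = True
            self.activated = now
            return True
        return False
```

Note that newly spawned interfaces get the stricter IC_BURST_FREQ_NEW threshold, since a fresh connection flooding announces is the case ingress control mainly targets.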
    def hold_announce(self, announce_packet):
        if announce_packet.destination_hash in self.held_announces:
            self.held_announces[announce_packet.destination_hash] = announce_packet
        elif not len(self.held_announces) >= self.ic_max_held_announces:
            self.held_announces[announce_packet.destination_hash] = announce_packet

    def process_held_announces(self):
        try:
            if not self.should_ingress_limit() and len(self.held_announces) > 0 and time.time() > self.ic_held_release:
                freq_threshold = self.ic_burst_freq_new if self.age() < self.ic_new_time else self.ic_burst_freq
                ia_freq = self.incoming_announce_frequency()
                if ia_freq < freq_threshold:
                    selected_announce_packet = None
                    min_hops = RNS.Transport.PATHFINDER_M
                    for destination_hash in self.held_announces:
                        announce_packet = self.held_announces[destination_hash]
                        if announce_packet.hops < min_hops:
                            min_hops = announce_packet.hops
                            selected_announce_packet = announce_packet

                    if selected_announce_packet != None:
                        RNS.log("Releasing held announce packet "+str(selected_announce_packet)+" from "+str(self), RNS.LOG_EXTREME)
                        self.ic_held_release = time.time() + self.ic_held_release_interval
                        self.held_announces.pop(selected_announce_packet.destination_hash)
                        def release():
                            RNS.Transport.inbound(selected_announce_packet.raw, selected_announce_packet.receiving_interface)
                        threading.Thread(target=release, daemon=True).start()

        except Exception as e:
            RNS.log("An error occurred while processing held announces for "+str(self), RNS.LOG_ERROR)
            RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
    def received_announce(self):
        self.ia_freq_deque.append(time.time())
        if hasattr(self, "parent_interface") and self.parent_interface != None:
            self.parent_interface.received_announce(from_spawned=True)

    def sent_announce(self):
        self.oa_freq_deque.append(time.time())
        if hasattr(self, "parent_interface") and self.parent_interface != None:
            self.parent_interface.sent_announce(from_spawned=True)

    def incoming_announce_frequency(self):
        if not len(self.ia_freq_deque) > 1:
            return 0
        else:
            dq_len = len(self.ia_freq_deque)
            delta_sum = 0
            for i in range(1,dq_len):
                delta_sum += self.ia_freq_deque[i]-self.ia_freq_deque[i-1]
            delta_sum += time.time() - self.ia_freq_deque[dq_len-1]

            if delta_sum == 0:
                avg = 0
            else:
                avg = 1/(delta_sum/(dq_len))

            return avg
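The frequency estimator above averages the inter-arrival deltas of the sampled timestamps, and also counts the gap between the newest sample and "now" as one more interval, so the estimate decays toward zero when announces stop arriving. The same computation as a standalone function (function name hypothetical):

```python
import time
from collections import deque

def event_frequency(timestamps, now=None):
    # Average events-per-second over a bounded window of timestamps.
    # The gap from the newest sample to `now` is included so the
    # estimate falls off when events stop, mirroring
    # incoming_announce_frequency above.
    if len(timestamps) < 2:
        return 0
    now = time.time() if now is None else now
    samples = list(timestamps)
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    delta_sum = sum(deltas) + (now - samples[-1])
    return 0 if delta_sum == 0 else 1 / (delta_sum / len(samples))
```

With IA_FREQ_SAMPLES = 6, the deque keeps only the six most recent announce timestamps, so the estimate tracks recent behaviour rather than the lifetime average.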
    def outgoing_announce_frequency(self):
        if not len(self.oa_freq_deque) > 1:
            return 0
        else:
            dq_len = len(self.oa_freq_deque)
            delta_sum = 0
            for i in range(1,dq_len):
                delta_sum += self.oa_freq_deque[i]-self.oa_freq_deque[i-1]
            delta_sum += time.time() - self.oa_freq_deque[dq_len-1]

            if delta_sum == 0:
                avg = 0
            else:
                avg = 1/(delta_sum/(dq_len))

            return avg

    def process_announce_queue(self):
        if not hasattr(self, "announce_cap"):
            self.announce_cap = RNS.Reticulum.ANNOUNCE_CAP

@@ -79,6 +223,7 @@ class Interface:

            self.announce_allowed_at = now + wait_time

            self.processOutgoing(selected["raw"])
            self.sent_announce()

            if selected in self.announce_queue:
                self.announce_queue.remove(selected)
@@ -70,8 +70,7 @@ class KISSInterface(Interface):

        RNS.log("You can install one with the command: python3 -m pip install pyserial", RNS.LOG_CRITICAL)
        RNS.panic()

        self.rxb = 0
        self.txb = 0
        super().__init__()

        self.HW_MTU = 564

@@ -144,7 +143,7 @@ class KISSInterface(Interface):

        # Allow time for interface to initialise before config
        sleep(2.0)
        thread = threading.Thread(target=self.readLoop)
        thread.setDaemon(True)
        thread.daemon = True
        thread.start()
        self.online = True
        RNS.log("Serial port "+self.port+" is now open")

@@ -349,5 +348,8 @@ class KISSInterface(Interface):

        RNS.log("Reconnected serial port for "+str(self))

    def should_ingress_limit(self):
        return False

    def __str__(self):
        return "KISSInterface["+self.name+"]"
@@ -1,6 +1,6 @@

# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal

@@ -28,6 +28,7 @@ import time

import sys
import os
import RNS
from threading import Lock

class HDLC():
    FLAG = 0x7E

@@ -41,14 +42,22 @@ class HDLC():

        return data

class ThreadingTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass
    def server_bind(self):
        if RNS.vendor.platformutils.is_windows():
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)
        else:
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socket.bind(self.server_address)
        self.server_address = self.socket.getsockname()
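The server_bind override added above exists because the two platforms need opposite socket options: on Windows, SO_REUSEADDR would let another process hijack a port that is already bound, so SO_EXCLUSIVEADDRUSE is the safe choice there, while elsewhere SO_REUSEADDR simply allows rebinding a port stuck in TIME_WAIT. A runnable sketch of the same override (sys.platform stands in for the RNS platform check):

```python
import socket
import socketserver
import sys
import threading

class ReusableTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    def server_bind(self):
        # Platform-dependent rebind behaviour, as in the diff above.
        if sys.platform == "win32":
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)
        else:
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socket.bind(self.server_address)
        # Re-read the address: binding to port 0 makes the kernel pick
        # a free port, and callers need to learn which one.
        self.server_address = self.socket.getsockname()
```

Re-reading server_address after bind is what lets code bind to port 0 and then discover the assigned port.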
class LocalClientInterface(Interface):
    RECONNECT_WAIT = 3
    RECONNECT_WAIT = 8

    def __init__(self, owner, name, target_port = None, connected_socket=None):
        self.rxb = 0
        self.txb = 0
        super().__init__()

        # TODO: Remove at some point
        # self.rxptime = 0

        self.HW_MTU = 1064

@@ -69,6 +78,7 @@ class LocalClientInterface(Interface):

            self.target_ip = None
            self.target_port = None
            self.socket = connected_socket
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

            self.is_connected_to_shared_instance = False

@@ -83,17 +93,23 @@ class LocalClientInterface(Interface):

        self.online = True
        self.writing = False

        self._force_bitrate = False

        self.announce_rate_target = None
        self.announce_rate_grace = None
        self.announce_rate_penalty = None

        if connected_socket == None:
            thread = threading.Thread(target=self.read_loop)
            thread.setDaemon(True)
            thread.daemon = True
            thread.start()

    def should_ingress_limit(self):
        return False

    def connect(self):
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        self.socket.connect((self.target_ip, self.target_port))

        self.online = True
@@ -120,13 +136,16 @@ class LocalClientInterface(Interface):

                RNS.log("Connection attempt for "+str(self)+" failed: "+str(e), RNS.LOG_DEBUG)

            if not self.never_connected:
                RNS.log("Reconnected TCP socket for "+str(self)+".", RNS.LOG_INFO)
                RNS.log("Reconnected socket for "+str(self)+".", RNS.LOG_INFO)

            self.reconnecting = False
            thread = threading.Thread(target=self.read_loop)
            thread.setDaemon(True)
            thread.daemon = True
            thread.start()
            RNS.Transport.shared_connection_reappeared()
            def job():
                time.sleep(LocalClientInterface.RECONNECT_WAIT+2)
                RNS.Transport.shared_connection_reappeared()
            threading.Thread(target=job, daemon=True).start()

        else:
            RNS.log("Attempt to reconnect on a non-initiator shared local interface. This should not happen.", RNS.LOG_ERROR)
@@ -137,17 +156,30 @@ class LocalClientInterface(Interface):

        self.rxb += len(data)
        if hasattr(self, "parent_interface") and self.parent_interface != None:
            self.parent_interface.rxb += len(data)

        # TODO: Remove at some point
        # processing_start = time.time()

        self.owner.inbound(data, self)

        # TODO: Remove at some point
        # duration = time.time() - processing_start
        # self.rxptime += duration

    def processOutgoing(self, data):
        if self.online:
            while self.writing:
                time.sleep(0.01)

            try:
                self.writing = True

                if self._force_bitrate:
                    if not hasattr(self, "send_lock"):
                        self.send_lock = Lock()

                    with self.send_lock:
                        s = len(data) / self.bitrate * 8
                        RNS.log(f"Simulating latency of {RNS.prettytime(s)} for {len(data)} bytes")
                        time.sleep(s)

                data = bytes([HDLC.FLAG])+HDLC.escape(data)+bytes([HDLC.FLAG])
                self.socket.sendall(data)
                self.writing = False
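processOutgoing above frames each payload as `FLAG + escape(data) + FLAG`. The diff only shows `FLAG = 0x7E`; the sketch below assumes the conventional HDLC-style byte stuffing (escape byte 0x7D, XOR mask 0x20) for the escape step, so any FLAG or ESC byte inside the payload can never be mistaken for a frame boundary:

```python
FLAG = 0x7E      # frame delimiter, as in the HDLC class above
ESC = 0x7D       # assumed conventional escape byte
ESC_MASK = 0x20  # assumed conventional XOR mask

def hdlc_escape(data: bytes) -> bytes:
    # Byte-stuff the payload so FLAG/ESC never appear inside a frame.
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ ESC_MASK])
        else:
            out.append(b)
    return bytes(out)

def hdlc_frame(data: bytes) -> bytes:
    # Delimit the escaped payload with FLAG bytes, as processOutgoing does.
    return bytes([FLAG]) + hdlc_escape(data) + bytes([FLAG])
```

The receiving read loop reverses this: it accumulates bytes between FLAG delimiters and un-escapes ESC sequences, which is what the in_frame/escape state in read_loop tracks.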
@@ -240,6 +272,8 @@ class LocalClientInterface(Interface):

            RNS.Transport.local_client_interfaces.remove(self)
            if hasattr(self, "parent_interface") and self.parent_interface != None:
                self.parent_interface.clients -= 1
            if hasattr(RNS.Transport, "owner") and RNS.Transport.owner != None:
                RNS.Transport.owner._should_persist_data()

        if nowarning == False:
            RNS.log("The interface "+str(self)+" experienced an unrecoverable error and is being torn down. Restart Reticulum to attempt to open this interface again.", RNS.LOG_ERROR)

@@ -260,8 +294,7 @@ class LocalClientInterface(Interface):

class LocalServerInterface(Interface):

    def __init__(self, owner, bindport=None):
        self.rxb = 0
        self.txb = 0
        super().__init__()
        self.online = False
        self.clients = 0
@@ -285,11 +318,10 @@ class LocalServerInterface(Interface):

        address = (self.bind_ip, self.bind_port)

        ThreadingTCPServer.allow_reuse_address = True
        self.server = ThreadingTCPServer(address, handlerFactory(self.incoming_connection))

        thread = threading.Thread(target=self.server.serve_forever)
        thread.setDaemon(True)
        thread.daemon = True
        thread.start()

        self.announce_rate_target = None

@@ -310,7 +342,9 @@ class LocalServerInterface(Interface):

        spawned_interface.target_port = str(handler.client_address[1])
        spawned_interface.parent_interface = self
        spawned_interface.bitrate = self.bitrate
        RNS.log("Accepting new connection to shared instance: "+str(spawned_interface), RNS.LOG_EXTREME)
        if hasattr(self, "_force_bitrate"):
            spawned_interface._force_bitrate = self._force_bitrate
        # RNS.log("Accepting new connection to shared instance: "+str(spawned_interface), RNS.LOG_EXTREME)
        RNS.Transport.interfaces.append(spawned_interface)
        RNS.Transport.local_client_interfaces.append(spawned_interface)
        self.clients += 1

@@ -319,6 +353,12 @@ class LocalServerInterface(Interface):

    def processOutgoing(self, data):
        pass

    def received_announce(self, from_spawned=False):
        if from_spawned: self.ia_freq_deque.append(time.time())

    def sent_announce(self, from_spawned=False):
        if from_spawned: self.oa_freq_deque.append(time.time())

    def __str__(self):
        return "Shared Instance["+str(self.bind_port)+"]"
@@ -54,8 +54,7 @@ class PipeInterface(Interface):

        if respawn_delay == None:
            respawn_delay = 5

        self.rxb = 0
        self.txb = 0
        super().__init__()

        self.HW_MTU = 1064

@@ -96,7 +95,7 @@ class PipeInterface(Interface):

    def configure_pipe(self):
        sleep(0.01)
        thread = threading.Thread(target=self.readLoop)
        thread.setDaemon(True)
        thread.daemon = True
        thread.start()
        self.online = True
        RNS.log("Subprocess pipe for "+str(self)+" is now connected", RNS.LOG_VERBOSE)
@@ -1,6 +1,6 @@

# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal

@@ -43,14 +43,23 @@ class KISS():

    CMD_CR = 0x05
    CMD_RADIO_STATE = 0x06
    CMD_RADIO_LOCK = 0x07
    CMD_ST_ALOCK = 0x0B
    CMD_LT_ALOCK = 0x0C
    CMD_DETECT = 0x08
    CMD_LEAVE = 0x0A
    CMD_READY = 0x0F
    CMD_STAT_RX = 0x21
    CMD_STAT_TX = 0x22
    CMD_STAT_RSSI = 0x23
    CMD_STAT_SNR = 0x24
    CMD_STAT_CHTM = 0x25
    CMD_STAT_PHYPRM = 0x26
    CMD_BLINK = 0x30
    CMD_RANDOM = 0x40
    CMD_FB_EXT = 0x41
    CMD_FB_READ = 0x42
    CMD_FB_WRITE = 0x43
    CMD_BT_CTRL = 0x46
    CMD_PLATFORM = 0x48
    CMD_MCU = 0x49
    CMD_FW_VERSION = 0x50
@@ -82,25 +91,26 @@ class KISS():

class RNodeInterface(Interface):
    MAX_CHUNK = 32768

    owner = None
    port = None
    speed = None
    databits = None
    parity = None
    stopbits = None
    serial = None

    FREQ_MIN = 137000000
    FREQ_MAX = 1020000000
    FREQ_MAX = 3000000000

    RSSI_OFFSET = 157

    CALLSIGN_MAX_LEN = 32

    REQUIRED_FW_VER_MAJ = 1
    REQUIRED_FW_VER_MIN = 26
    REQUIRED_FW_VER_MIN = 52

    RECONNECT_WAIT = 5

    Q_SNR_MIN_BASE = -9
    Q_SNR_MAX = 6
    Q_SNR_STEP = 2

    def __init__(self, owner, name, port, frequency = None, bandwidth = None, txpower = None, sf = None, cr = None, flow_control = False, id_interval = None, id_callsign = None, st_alock = None, lt_alock = None):
        if RNS.vendor.platformutils.is_android():
            raise SystemError("Invalid interface type. The Android-specific RNode interface must be used on Android")

    def __init__(self, owner, name, port, frequency = None, bandwidth = None, txpower = None, sf = None, cr = None, flow_control = False, id_interval = None, id_callsign = None):
        import importlib
        if importlib.util.find_spec('serial') != None:
            import serial
@@ -109,8 +119,7 @@ class RNodeInterface(Interface):

        RNS.log("You can install one with the command: python3 -m pip install pyserial", RNS.LOG_CRITICAL)
        RNS.panic()

        self.rxb = 0
        self.txb = 0
        super().__init__()

        self.HW_MTU = 508

@@ -121,10 +130,11 @@ class RNodeInterface(Interface):

        self.port = port
        self.speed = 115200
        self.databits = 8
        self.parity = serial.PARITY_NONE
        self.stopbits = 1
        self.timeout = 100
        self.online = False
        self.detached = False
        self.reconnecting= False

        self.frequency = frequency
        self.bandwidth = bandwidth

@@ -133,7 +143,10 @@ class RNodeInterface(Interface):

        self.cr = cr
        self.state = KISS.RADIO_STATE_OFF
        self.bitrate = 0
        self.st_alock = st_alock
        self.lt_alock = lt_alock
        self.platform = None
        self.display = None
        self.mcu = None
        self.detected = False
        self.firmware_ok = False

@@ -142,6 +155,7 @@ class RNodeInterface(Interface):

        self.last_id = 0
        self.first_tx = None
        self.reconnect_w = RNodeInterface.RECONNECT_WAIT

        self.r_frequency = None
        self.r_bandwidth = None
@@ -153,26 +167,38 @@ class RNodeInterface(Interface):

        self.r_stat_rx = None
        self.r_stat_tx = None
        self.r_stat_rssi = None
        self.r_stat_snr = None
        self.r_st_alock = None
        self.r_lt_alock = None
        self.r_random = None
        self.r_airtime_short = 0.0
        self.r_airtime_long = 0.0
        self.r_channel_load_short = 0.0
        self.r_channel_load_long = 0.0
        self.r_symbol_time_ms = None
        self.r_symbol_rate = None
        self.r_preamble_symbols = None
        self.r_premable_time_ms = None

        self.packet_queue = []
        self.flow_control = flow_control
        self.interface_ready = False
        self.announce_rate_target = None

        self.validcfg = True
        if (self.frequency < RNodeInterface.FREQ_MIN or self.frequency > RNodeInterface.FREQ_MAX):
            RNS.log("Invalid frequency configured for "+str(self), RNS.LOG_ERROR)
            self.validcfg = False

        if (self.txpower < 0 or self.txpower > 17):
        if (self.txpower < 0 or self.txpower > 22):
            RNS.log("Invalid TX power configured for "+str(self), RNS.LOG_ERROR)
            self.validcfg = False

        if (self.bandwidth < 7800 or self.bandwidth > 500000):
        if (self.bandwidth < 7800 or self.bandwidth > 1625000):
            RNS.log("Invalid bandwidth configured for "+str(self), RNS.LOG_ERROR)
            self.validcfg = False

        if (self.sf < 7 or self.sf > 12):
        if (self.sf < 5 or self.sf > 12):
            RNS.log("Invalid spreading factor configured for "+str(self), RNS.LOG_ERROR)
            self.validcfg = False
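The spreading factor, bandwidth and coding rate validated above jointly determine the LoRa on-air bitrate. A sketch of the textbook relation (this is the standard LoRa formula, not necessarily the exact expression the RNode firmware reports): each symbol carries SF bits, symbols arrive at BW/2^SF per second, and the 4/CR coding rate (cr = 5..8 for rates 4/5..4/8) scales away the FEC overhead.

```python
def lora_bitrate(sf: int, bandwidth_hz: int, cr: int) -> float:
    # Standard LoRa bitrate: SF bits per symbol, BW/2^SF symbols per
    # second, scaled by the 4/cr coding-rate overhead (cr is the
    # denominator value 5..8, i.e. coding rates 4/5 .. 4/8).
    symbol_rate = bandwidth_hz / (2 ** sf)
    return sf * symbol_rate * (4 / cr)
```

This is why the widened limits matter: SF5 at 1625 kHz bandwidth is orders of magnitude faster than SF12 at 7.8 kHz, and the interface's bitrate estimate drives Reticulum's announce rate limiting.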
@@ -180,6 +206,14 @@ class RNodeInterface(Interface):

            RNS.log("Invalid coding rate configured for "+str(self), RNS.LOG_ERROR)
            self.validcfg = False

        if (self.st_alock and (self.st_alock < 0.0 or self.st_alock > 100.0)):
            RNS.log("Invalid short-term airtime limit configured for "+str(self), RNS.LOG_ERROR)
            self.validcfg = False

        if (self.lt_alock and (self.lt_alock < 0.0 or self.lt_alock > 100.0)):
            RNS.log("Invalid long-term airtime limit configured for "+str(self), RNS.LOG_ERROR)
            self.validcfg = False

        if id_interval != None and id_callsign != None:
            if (len(id_callsign.encode("utf-8")) <= RNodeInterface.CALLSIGN_MAX_LEN):
                self.should_id = True

@@ -197,14 +231,21 @@ class RNodeInterface(Interface):

        try:
            self.open_port()

            if self.serial.is_open:
                self.configure_device()
            else:
                raise IOError("Could not open serial port")

        except Exception as e:
            RNS.log("Could not open serial port for interface "+str(self), RNS.LOG_ERROR)
            raise e
            RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
            RNS.log("Reticulum will attempt to bring up this interface periodically", RNS.LOG_ERROR)
            if not self.detached and not self.reconnecting:
                thread = threading.Thread(target=self.reconnect_port)
                thread.daemon = True
                thread.start()

        if self.serial.is_open:
            self.configure_device()
        else:
            raise IOError("Could not open serial port")

    def open_port(self):
        RNS.log("Opening serial port "+self.port+"...")
@ -212,7 +253,7 @@ class RNodeInterface(Interface):
|
|||
port = self.port,
|
||||
baudrate = self.speed,
|
||||
bytesize = self.databits,
|
||||
parity = self.parity,
|
||||
parity = self.pyserial.PARITY_NONE,
|
||||
stopbits = self.stopbits,
|
||||
xonxoff = False,
|
||||
rtscts = False,
|
||||
|
@@ -224,35 +265,42 @@ class RNodeInterface(Interface):

    def configure_device(self):
        self.r_frequency = None
        self.r_bandwidth = None
        self.r_txpower = None
        self.r_sf = None
        self.r_cr = None
        self.r_state = None
        self.r_lock = None
        sleep(2.0)

        thread = threading.Thread(target=self.readLoop)
        thread.setDaemon(True)
        thread.daemon = True
        thread.start()

        self.detect()
        sleep(0.1)
        sleep(0.2)

        if not self.detected:
            raise IOError("Could not detect device")
            RNS.log("Could not detect device for "+str(self), RNS.LOG_ERROR)
            self.serial.close()
        else:
            if self.platform == KISS.PLATFORM_ESP32:
                RNS.log("Resetting ESP32-based device before configuration...", RNS.LOG_VERBOSE)
                self.hard_reset()
                self.display = True

            self.online = True
            RNS.log("Serial port "+self.port+" is now open")
            RNS.log("Configuring RNode interface...", RNS.LOG_VERBOSE)
            self.initRadio()
            if (self.validateRadioState()):
                self.interface_ready = True
                RNS.log(str(self)+" is configured and powered up")
                sleep(1.0)
                sleep(0.3)
                self.online = True
            else:
                RNS.log("After configuring "+str(self)+", the reported radio parameters did not match your configuration.", RNS.LOG_ERROR)
                RNS.log("Make sure that your hardware actually supports the parameters specified in the configuration", RNS.LOG_ERROR)
                RNS.log("Aborting RNode startup", RNS.LOG_ERROR)
                self.serial.close()
                raise IOError("RNode interface did not pass configuration validation")


    def initRadio(self):

@@ -261,13 +309,59 @@ class RNodeInterface(Interface):
        self.setTXPower()
        self.setSpreadingFactor()
        self.setCodingRate()
        self.setSTALock()
        self.setLTALock()
        self.setRadioState(KISS.RADIO_STATE_ON)

    def detect(self):
        kiss_command = bytes([KISS.FEND, KISS.CMD_DETECT, KISS.DETECT_REQ, KISS.FEND, KISS.CMD_FW_VERSION, 0x00, KISS.FEND, KISS.CMD_PLATFORM, 0x00, KISS.FEND, KISS.CMD_MCU, 0x00, KISS.FEND])
        written = self.serial.write(kiss_command)
        if written != len(kiss_command):
            raise IOError("An IO error occurred while detecting hardware for "+self(str))
            raise IOError("An IO error occurred while detecting hardware for "+str(self))

    def leave(self):
        kiss_command = bytes([KISS.FEND, KISS.CMD_LEAVE, 0xFF, KISS.FEND])
        written = self.serial.write(kiss_command)
        if written != len(kiss_command):
            raise IOError("An IO error occurred while sending host left command to device")

    def enable_external_framebuffer(self):
        if self.display != None:
            kiss_command = bytes([KISS.FEND, KISS.CMD_FB_EXT, 0x01, KISS.FEND])
            written = self.serial.write(kiss_command)
            if written != len(kiss_command):
                raise IOError("An IO error occurred while enabling external framebuffer on device")

    def disable_external_framebuffer(self):
        if self.display != None:
            kiss_command = bytes([KISS.FEND, KISS.CMD_FB_EXT, 0x00, KISS.FEND])
            written = self.serial.write(kiss_command)
            if written != len(kiss_command):
                raise IOError("An IO error occurred while disabling external framebuffer on device")

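All of the commands above, including `detect()` and `leave()`, are wrapped in KISS framing before being written to the serial port. A minimal standalone sketch of that framing, using the standard KISS byte values (the helper names `kiss_escape`/`kiss_frame` are illustrative, not the `KISS` class used in the diff):

```python
FEND  = 0xC0  # frame delimiter
FESC  = 0xDB  # escape byte
TFEND = 0xDC  # transposed FEND
TFESC = 0xDD  # transposed FESC

def kiss_escape(data: bytes) -> bytes:
    # FESC must be escaped first, or escaped FENDs would be double-escaped
    data = data.replace(bytes([FESC]), bytes([FESC, TFESC]))
    data = data.replace(bytes([FEND]), bytes([FESC, TFEND]))
    return data

def kiss_frame(command: int, payload: bytes) -> bytes:
    # A frame is FEND, command byte, escaped payload, FEND
    return bytes([FEND, command]) + kiss_escape(payload) + bytes([FEND])
```

Payload bytes that collide with the delimiter are transposed, so the receiver can always resynchronise on a bare `FEND`.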
    FB_PIXEL_WIDTH = 64
    FB_BITS_PER_PIXEL = 1
    FB_PIXELS_PER_BYTE = 8//FB_BITS_PER_PIXEL
    FB_BYTES_PER_LINE = FB_PIXEL_WIDTH//FB_PIXELS_PER_BYTE
    def display_image(self, imagedata):
        if self.display != None:
            lines = len(imagedata)//8
            for line in range(lines):
                line_start = line*RNodeInterface.FB_BYTES_PER_LINE
                line_end = line_start+RNodeInterface.FB_BYTES_PER_LINE
                line_data = bytes(imagedata[line_start:line_end])
                self.write_framebuffer(line, line_data)

    def write_framebuffer(self, line, line_data):
        if self.display != None:
            line_byte = line.to_bytes(1, byteorder="big", signed=False)
            data = line_byte+line_data
            escaped_data = KISS.escape(data)
            kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_FB_WRITE])+escaped_data+bytes([KISS.FEND])

            written = self.serial.write(kiss_command)
            if written != len(kiss_command):
                raise IOError("An IO error occurred while writing framebuffer data to device")

    def hard_reset(self):
        kiss_command = bytes([KISS.FEND, KISS.CMD_RESET, 0xf8, KISS.FEND])

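With 64 pixels per row at 1 bit per pixel, each framebuffer line is 8 bytes, and `display_image()` simply slices the packed buffer into per-line writes. A standalone sketch of that slicing (`split_framebuffer` is a hypothetical helper mirroring the `FB_*` constants above):

```python
FB_PIXEL_WIDTH = 64
FB_BITS_PER_PIXEL = 1
FB_PIXELS_PER_BYTE = 8 // FB_BITS_PER_PIXEL
FB_BYTES_PER_LINE = FB_PIXEL_WIDTH // FB_PIXELS_PER_BYTE  # 8 bytes per row

def split_framebuffer(imagedata):
    # Slice a packed 1-bpp framebuffer into per-line chunks, one chunk
    # per write_framebuffer() call.
    lines = len(imagedata) // FB_BYTES_PER_LINE
    return [bytes(imagedata[i*FB_BYTES_PER_LINE:(i+1)*FB_BYTES_PER_LINE])
            for i in range(lines)]
```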
@@ -286,7 +380,7 @@ class RNodeInterface(Interface):
        kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_FREQUENCY])+data+bytes([KISS.FEND])
        written = self.serial.write(kiss_command)
        if written != len(kiss_command):
            raise IOError("An IO error occurred while configuring frequency for "+self(str))
            raise IOError("An IO error occurred while configuring frequency for "+str(self))

    def setBandwidth(self):
        c1 = self.bandwidth >> 24

@@ -298,35 +392,59 @@ class RNodeInterface(Interface):
        kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_BANDWIDTH])+data+bytes([KISS.FEND])
        written = self.serial.write(kiss_command)
        if written != len(kiss_command):
            raise IOError("An IO error occurred while configuring bandwidth for "+self(str))
            raise IOError("An IO error occurred while configuring bandwidth for "+str(self))

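`setFrequency()` and `setBandwidth()` split a 32-bit value into four big-endian bytes (`c1` through `c4`) before escaping. The shift arithmetic is equivalent to `int.to_bytes` (`encode_u32` is a hypothetical helper name):

```python
def encode_u32(value: int) -> bytes:
    # Same packing as c1..c4 in setFrequency()/setBandwidth():
    # most significant byte first (big-endian).
    c1 = value >> 24 & 0xFF
    c2 = value >> 16 & 0xFF
    c3 = value >> 8 & 0xFF
    c4 = value & 0xFF
    return bytes([c1, c2, c3, c4])
```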
    def setTXPower(self):
        txp = bytes([self.txpower])
        kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_TXPOWER])+txp+bytes([KISS.FEND])
        written = self.serial.write(kiss_command)
        if written != len(kiss_command):
            raise IOError("An IO error occurred while configuring TX power for "+self(str))
            raise IOError("An IO error occurred while configuring TX power for "+str(self))

    def setSpreadingFactor(self):
        sf = bytes([self.sf])
        kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_SF])+sf+bytes([KISS.FEND])
        written = self.serial.write(kiss_command)
        if written != len(kiss_command):
            raise IOError("An IO error occurred while configuring spreading factor for "+self(str))
            raise IOError("An IO error occurred while configuring spreading factor for "+str(self))

    def setCodingRate(self):
        cr = bytes([self.cr])
        kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_CR])+cr+bytes([KISS.FEND])
        written = self.serial.write(kiss_command)
        if written != len(kiss_command):
            raise IOError("An IO error occurred while configuring coding rate for "+self(str))
            raise IOError("An IO error occurred while configuring coding rate for "+str(self))

    def setSTALock(self):
        if self.st_alock != None:
            at = int(self.st_alock*100)
            c1 = at >> 8 & 0xFF
            c2 = at & 0xFF
            data = KISS.escape(bytes([c1])+bytes([c2]))

            kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_ST_ALOCK])+data+bytes([KISS.FEND])
            written = self.serial.write(kiss_command)
            if written != len(kiss_command):
                raise IOError("An IO error occurred while configuring short-term airtime limit for "+str(self))

    def setLTALock(self):
        if self.lt_alock != None:
            at = int(self.lt_alock*100)
            c1 = at >> 8 & 0xFF
            c2 = at & 0xFF
            data = KISS.escape(bytes([c1])+bytes([c2]))

            kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_LT_ALOCK])+data+bytes([KISS.FEND])
            written = self.serial.write(kiss_command)
            if written != len(kiss_command):
                raise IOError("An IO error occurred while configuring long-term airtime limit for "+str(self))

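`setSTALock()`/`setLTALock()` send the airtime limit as hundredths of a percent in two big-endian bytes, and the read loop decodes it back with `at/100.0`. A roundtrip sketch (hypothetical helper names):

```python
def encode_airtime_percent(percent):
    # e.g. 2.5 (%) -> 250 -> b"\x00\xfa", as in setSTALock()
    at = int(percent * 100)
    return bytes([at >> 8 & 0xFF, at & 0xFF])

def decode_airtime_percent(data):
    # Inverse operation, as in the CMD_ST_ALOCK handler in the read loop
    at = data[0] << 8 | data[1]
    return at / 100.0
```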
    def setRadioState(self, state):
        self.state = state
        kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_RADIO_STATE])+bytes([state])+bytes([KISS.FEND])
        written = self.serial.write(kiss_command)
        if written != len(kiss_command):
            raise IOError("An IO error occurred while configuring radio state for "+self(str))
            raise IOError("An IO error occurred while configuring radio state for "+str(self))

    def validate_firmware(self):
        if (self.maj_version >= RNodeInterface.REQUIRED_FW_VER_MAJ):

@@ -338,14 +456,16 @@ class RNodeInterface(Interface):

            RNS.log("The firmware version of the connected RNode is "+str(self.maj_version)+"."+str(self.min_version), RNS.LOG_ERROR)
            RNS.log("This version of Reticulum requires at least version "+str(RNodeInterface.REQUIRED_FW_VER_MAJ)+"."+str(RNodeInterface.REQUIRED_FW_VER_MIN), RNS.LOG_ERROR)
            RNS.log("Please update your RNode firmware with rnodeconf (https://github.com/markqvist/rnodeconfigutil/)")
            RNS.log("Please update your RNode firmware with rnodeconf from https://github.com/markqvist/rnodeconfigutil/")
            RNS.panic()


    def validateRadioState(self):
        RNS.log("Wating for radio configuration validation for "+str(self)+"...", RNS.LOG_VERBOSE)
        RNS.log("Waiting for radio configuration validation for "+str(self)+"...", RNS.LOG_VERBOSE)
        sleep(0.25)
        if (self.frequency != self.r_frequency):

        self.validcfg = True
        if (self.r_frequency != None and abs(self.frequency - int(self.r_frequency)) > 100):
            RNS.log("Frequency mismatch", RNS.LOG_ERROR)
            self.validcfg = False
        if (self.bandwidth != self.r_bandwidth):

@@ -376,7 +496,7 @@ class RNodeInterface(Interface):
        self.bitrate = 0

    def processIncoming(self, data):
        self.rxb += len(data)
        self.rxb += len(data)
        self.owner.inbound(data, self)
        self.r_stat_rssi = None
        self.r_stat_snr = None

@@ -556,6 +676,96 @@ class RNodeInterface(Interface):
                    self.r_stat_rssi = byte-RNodeInterface.RSSI_OFFSET
                elif (command == KISS.CMD_STAT_SNR):
                    self.r_stat_snr = int.from_bytes(bytes([byte]), byteorder="big", signed=True) * 0.25
                    try:
                        sfs = self.r_sf-7
                        snr = self.r_stat_snr
                        q_snr_min = RNodeInterface.Q_SNR_MIN_BASE-sfs*RNodeInterface.Q_SNR_STEP
                        q_snr_max = RNodeInterface.Q_SNR_MAX
                        q_snr_span = q_snr_max-q_snr_min
                        quality = round(((snr-q_snr_min)/(q_snr_span))*100,1)
                        if quality > 100.0: quality = 100.0
                        if quality < 0.0: quality = 0.0
                        self.r_stat_q = quality
                    except:
                        pass
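The `CMD_STAT_SNR` handler maps the raw SNR reading onto a 0–100 quality figure, widening the usable span for higher spreading factors. A standalone sketch; the constant values below (`Q_SNR_MIN_BASE = -9`, `Q_SNR_MAX = 6`, `Q_SNR_STEP = 2`) are assumptions for illustration, not confirmed by this diff:

```python
Q_SNR_MIN_BASE = -9  # assumed value, dB
Q_SNR_MAX = 6        # assumed value, dB
Q_SNR_STEP = 2       # assumed value, dB per SF step

def snr_quality(snr, sf):
    # Mirrors the CMD_STAT_SNR handler: the usable SNR floor drops by
    # Q_SNR_STEP dB for every spreading factor step above SF7, and the
    # result is clamped to the 0..100 range.
    sfs = sf - 7
    q_snr_min = Q_SNR_MIN_BASE - sfs * Q_SNR_STEP
    q_snr_span = Q_SNR_MAX - q_snr_min
    quality = round(((snr - q_snr_min) / q_snr_span) * 100, 1)
    return max(0.0, min(100.0, quality))
```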
                elif (command == KISS.CMD_ST_ALOCK):
                    if (byte == KISS.FESC):
                        escape = True
                    else:
                        if (escape):
                            if (byte == KISS.TFEND):
                                byte = KISS.FEND
                            if (byte == KISS.TFESC):
                                byte = KISS.FESC
                            escape = False
                        command_buffer = command_buffer+bytes([byte])
                        if (len(command_buffer) == 2):
                            at = command_buffer[0] << 8 | command_buffer[1]
                            self.r_st_alock = at/100.0
                            RNS.log(str(self)+" Radio reporting short-term airtime limit is "+str(self.r_st_alock)+"%", RNS.LOG_DEBUG)
                elif (command == KISS.CMD_LT_ALOCK):
                    if (byte == KISS.FESC):
                        escape = True
                    else:
                        if (escape):
                            if (byte == KISS.TFEND):
                                byte = KISS.FEND
                            if (byte == KISS.TFESC):
                                byte = KISS.FESC
                            escape = False
                        command_buffer = command_buffer+bytes([byte])
                        if (len(command_buffer) == 2):
                            at = command_buffer[0] << 8 | command_buffer[1]
                            self.r_lt_alock = at/100.0
                            RNS.log(str(self)+" Radio reporting long-term airtime limit is "+str(self.r_lt_alock)+"%", RNS.LOG_DEBUG)
                elif (command == KISS.CMD_STAT_CHTM):
                    if (byte == KISS.FESC):
                        escape = True
                    else:
                        if (escape):
                            if (byte == KISS.TFEND):
                                byte = KISS.FEND
                            if (byte == KISS.TFESC):
                                byte = KISS.FESC
                            escape = False
                        command_buffer = command_buffer+bytes([byte])
                        if (len(command_buffer) == 8):
                            ats = command_buffer[0] << 8 | command_buffer[1]
                            atl = command_buffer[2] << 8 | command_buffer[3]
                            cus = command_buffer[4] << 8 | command_buffer[5]
                            cul = command_buffer[6] << 8 | command_buffer[7]

                            self.r_airtime_short = ats/100.0
                            self.r_airtime_long = atl/100.0
                            self.r_channel_load_short = cus/100.0
                            self.r_channel_load_long = cul/100.0
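The `CMD_STAT_CHTM` handler unpacks four 16-bit big-endian counters from an 8-byte buffer and scales each to a percentage. The same decoding as a standalone sketch (`decode_chtm` is a hypothetical helper name):

```python
def decode_chtm(command_buffer):
    # Four u16 big-endian fields, each in hundredths of a percent:
    # short/long airtime, then short/long channel load.
    assert len(command_buffer) == 8
    ats = command_buffer[0] << 8 | command_buffer[1]
    atl = command_buffer[2] << 8 | command_buffer[3]
    cus = command_buffer[4] << 8 | command_buffer[5]
    cul = command_buffer[6] << 8 | command_buffer[7]
    return (ats/100.0, atl/100.0, cus/100.0, cul/100.0)
```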
                elif (command == KISS.CMD_STAT_PHYPRM):
                    if (byte == KISS.FESC):
                        escape = True
                    else:
                        if (escape):
                            if (byte == KISS.TFEND):
                                byte = KISS.FEND
                            if (byte == KISS.TFESC):
                                byte = KISS.FESC
                            escape = False
                        command_buffer = command_buffer+bytes([byte])
                        if (len(command_buffer) == 10):
                            lst = (command_buffer[0] << 8 | command_buffer[1])/1000.0
                            lsr = command_buffer[2] << 8 | command_buffer[3]
                            prs = command_buffer[4] << 8 | command_buffer[5]
                            prt = command_buffer[6] << 8 | command_buffer[7]
                            cst = command_buffer[8] << 8 | command_buffer[9]

                            if lst != self.r_symbol_time_ms or lsr != self.r_symbol_rate or prs != self.r_preamble_symbols or prt != self.r_premable_time_ms or cst != self.r_csma_slot_time_ms:
                                self.r_symbol_time_ms = lst
                                self.r_symbol_rate = lsr
                                self.r_preamble_symbols = prs
                                self.r_premable_time_ms = prt
                                self.r_csma_slot_time_ms = cst
                                RNS.log(str(self)+" Radio reporting symbol time is "+str(round(self.r_symbol_time_ms,2))+"ms (at "+str(self.r_symbol_rate)+" baud)", RNS.LOG_DEBUG)
                                RNS.log(str(self)+" Radio reporting preamble is "+str(self.r_preamble_symbols)+" symbols ("+str(self.r_premable_time_ms)+"ms)", RNS.LOG_DEBUG)
                                RNS.log(str(self)+" Radio reporting CSMA slot time is "+str(self.r_csma_slot_time_ms)+"ms", RNS.LOG_DEBUG)
                elif (command == KISS.CMD_RANDOM):
                    self.r_random = byte
                elif (command == KISS.CMD_PLATFORM):

@@ -566,7 +776,7 @@ class RNodeInterface(Interface):
                    if (byte == KISS.ERROR_INITRADIO):
                        RNS.log(str(self)+" hardware initialisation error (code "+RNS.hexrep(byte)+")", RNS.LOG_ERROR)
                        raise IOError("Radio initialisation failure")
                    elif (byte == KISS.ERROR_INITRADIO):
                    elif (byte == KISS.ERROR_TXFAILED):
                        RNS.log(str(self)+" hardware TX error (code "+RNS.hexrep(byte)+")", RNS.LOG_ERROR)
                        raise IOError("Hardware transmit failure")
                    else:

@@ -589,7 +799,7 @@ class RNodeInterface(Interface):
            else:
                time_since_last = int(time.time()*1000) - last_read_ms
                if len(data_buffer) > 0 and time_since_last > self.timeout:
                    RNS.log(str(self)+" serial read timeout", RNS.LOG_DEBUG)
                    RNS.log(str(self)+" serial read timeout in command "+str(command), RNS.LOG_WARNING)
                    data_buffer = b""
                    in_frame = False
                    command = KISS.CMD_UNKNOWN

@@ -614,13 +824,19 @@ class RNodeInterface(Interface):
        RNS.log("Reticulum will attempt to reconnect the interface periodically.", RNS.LOG_ERROR)

        self.online = False
        self.serial.close()
        self.reconnect_port()
        try:
            self.serial.close()
        except Exception as e:
            pass

        if not self.detached and not self.reconnecting:
            self.reconnect_port()

    def reconnect_port(self):
        while not self.online:
        self.reconnecting = True
        while not self.online and not self.detached:
            try:
                time.sleep(3.5)
                time.sleep(5)
                RNS.log("Attempting to reconnect serial port "+str(self.port)+" for "+str(self)+"...", RNS.LOG_VERBOSE)
                self.open_port()
                if self.serial.is_open:

@@ -628,8 +844,18 @@ class RNodeInterface(Interface):
            except Exception as e:
                RNS.log("Error while reconnecting port, the contained exception was: "+str(e), RNS.LOG_ERROR)

        RNS.log("Reconnected serial port for "+str(self))
        self.reconnecting = False
        if self.online:
            RNS.log("Reconnected serial port for "+str(self))

    def detach(self):
        self.detached = True
        self.disable_external_framebuffer()
        self.setRadioState(KISS.RADIO_STATE_OFF)
        self.leave()

    def should_ingress_limit(self):
        return False

    def __str__(self):
        return "RNodeInterface["+str(self.name)+"]"

@@ -60,8 +60,7 @@ class SerialInterface(Interface):
        RNS.log("You can install one with the command: python3 -m pip install pyserial", RNS.LOG_CRITICAL)
        RNS.panic()

        self.rxb = 0
        self.txb = 0
        super().__init__()

        self.HW_MTU = 564

@@ -116,7 +115,7 @@ class SerialInterface(Interface):
    def configure_device(self):
        sleep(0.5)
        thread = threading.Thread(target=self.readLoop)
        thread.setDaemon(True)
        thread.daemon = True
        thread.start()
        self.online = True
        RNS.log("Serial port "+self.port+" is now open", RNS.LOG_VERBOSE)

@@ -201,5 +200,8 @@ class SerialInterface(Interface):

        RNS.log("Reconnected serial port for "+str(self))

    def should_ingress_limit(self):
        return False

    def __str__(self):
        return "SerialInterface["+self.name+"]"

@@ -65,19 +65,21 @@ class TCPClientInterface(Interface):
    RECONNECT_MAX_TRIES = None

    # TCP socket options
    TCP_USER_TIMEOUT = 20
    TCP_USER_TIMEOUT = 24
    TCP_PROBE_AFTER = 5
    TCP_PROBE_INTERVAL = 3
    TCP_PROBES = 5
    TCP_PROBE_INTERVAL = 2
    TCP_PROBES = 12

    I2P_USER_TIMEOUT = 40
    INITIAL_CONNECT_TIMEOUT = 5
    SYNCHRONOUS_START = True

    I2P_USER_TIMEOUT = 45
    I2P_PROBE_AFTER = 10
    I2P_PROBE_INTERVAL = 5
    I2P_PROBES = 6
    I2P_PROBE_INTERVAL = 9
    I2P_PROBES = 5

    def __init__(self, owner, name, target_ip=None, target_port=None, connected_socket=None, max_reconnect_tries=None, kiss_framing=False, i2p_tunneled = False):
        self.rxb = 0
        self.txb = 0
    def __init__(self, owner, name, target_ip=None, target_port=None, connected_socket=None, max_reconnect_tries=None, kiss_framing=False, i2p_tunneled = False, connect_timeout = None):
        super().__init__()

        self.HW_MTU = 1064

@@ -114,23 +116,37 @@ class TCPClientInterface(Interface):
            elif platform.system() == "Darwin":
                self.set_timeouts_osx()

            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

        elif target_ip != None and target_port != None:
            self.receives = True
            self.target_ip = target_ip
            self.target_port = target_port
            self.initiator = True

            if not self.connect(initial=True):
                thread = threading.Thread(target=self.reconnect)
                thread.setDaemon(True)
                thread.start()
            else:
                thread = threading.Thread(target=self.read_loop)
                thread.setDaemon(True)
                thread.start()
                if not self.kiss_framing:
                    self.wants_tunnel = True

            if connect_timeout != None:
                self.connect_timeout = connect_timeout
            else:
                self.connect_timeout = TCPClientInterface.INITIAL_CONNECT_TIMEOUT

            if TCPClientInterface.SYNCHRONOUS_START:
                self.initial_connect()
            else:
                thread = threading.Thread(target=self.initial_connect)
                thread.daemon = True
                thread.start()

    def initial_connect(self):
        if not self.connect(initial=True):
            thread = threading.Thread(target=self.reconnect)
            thread.daemon = True
            thread.start()
        else:
            thread = threading.Thread(target=self.read_loop)
            thread.daemon = True
            thread.start()
            if not self.kiss_framing:
                self.wants_tunnel = True

    def set_timeouts_linux(self):
        if not self.i2p_tunneled:

@@ -139,6 +155,7 @@ class TCPClientInterface(Interface):
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, int(TCPClientInterface.TCP_PROBE_AFTER))
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, int(TCPClientInterface.TCP_PROBE_INTERVAL))
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, int(TCPClientInterface.TCP_PROBES))

        else:
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, int(TCPClientInterface.I2P_USER_TIMEOUT * 1000))
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

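These keepalive options make the kernel detect a dead peer in roughly `TCP_PROBE_AFTER + TCP_PROBE_INTERVAL * TCP_PROBES` seconds. A standalone sketch (`apply_keepalive` is a hypothetical helper; the `hasattr` guards are for platforms where the Linux option names are unavailable):

```python
import socket

def apply_keepalive(sock, probe_after=5, probe_interval=2, probes=12):
    # Enable keepalive probes; the defaults mirror the TCP_PROBE_AFTER /
    # TCP_PROBE_INTERVAL / TCP_PROBES values in the hunk above.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, probe_after)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, probe_interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
    return sock
```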
@@ -180,9 +197,18 @@ class TCPClientInterface(Interface):

    def connect(self, initial=False):
        try:
            if initial:
                RNS.log("Establishing TCP connection for "+str(self)+"...", RNS.LOG_DEBUG)

            self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.socket.settimeout(TCPClientInterface.INITIAL_CONNECT_TIMEOUT)
            self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            self.socket.connect((self.target_ip, self.target_port))
            self.socket.settimeout(None)
            self.online = True

            if initial:
                RNS.log("TCP connection for "+str(self)+" established", RNS.LOG_DEBUG)

        except Exception as e:
            if initial:

@@ -226,11 +252,11 @@ class TCPClientInterface(Interface):
                RNS.log("Connection attempt for "+str(self)+" failed: "+str(e), RNS.LOG_DEBUG)

        if not self.never_connected:
            RNS.log("Reconnected TCP socket for "+str(self)+".", RNS.LOG_INFO)
            RNS.log("Reconnected socket for "+str(self)+".", RNS.LOG_INFO)

        self.reconnecting = False
        thread = threading.Thread(target=self.read_loop)
        thread.setDaemon(True)
        thread.daemon = True
        thread.start()
        if not self.kiss_framing:
            RNS.Transport.synthesize_tunnel(self)

@@ -248,8 +274,8 @@ class TCPClientInterface(Interface):

    def processOutgoing(self, data):
        if self.online:
            while self.writing:
                time.sleep(0.01)
            # while self.writing:
            #     time.sleep(0.01)

            try:
                self.writing = True

@@ -335,10 +361,10 @@ class TCPClientInterface(Interface):
            else:
                self.online = False
                if self.initiator and not self.detached:
                    RNS.log("TCP socket for "+str(self)+" was closed, attempting to reconnect...", RNS.LOG_WARNING)
                    RNS.log("The socket for "+str(self)+" was closed, attempting to reconnect...", RNS.LOG_WARNING)
                    self.reconnect()
                else:
                    RNS.log("TCP socket for remote client "+str(self)+" was closed.", RNS.LOG_VERBOSE)
                    RNS.log("The socket for remote client "+str(self)+" was closed.", RNS.LOG_VERBOSE)
                    self.teardown()

                break

@@ -384,29 +410,18 @@ class TCPServerInterface(Interface):

    @staticmethod
    def get_address_for_if(name):
        import importlib
        if importlib.util.find_spec('netifaces') != None:
            import netifaces
            return netifaces.ifaddresses(name)[netifaces.AF_INET][0]['addr']
        else:
            RNS.log("Getting interface addresses from device names requires the netifaces module.", RNS.LOG_CRITICAL)
            RNS.log("You can install it with the command: python3 -m pip install netifaces", RNS.LOG_CRITICAL)
            RNS.panic()
        import RNS.vendor.ifaddr.niwrapper as netinfo
        ifaddr = netinfo.ifaddresses(name)
        return ifaddr[netinfo.AF_INET][0]["addr"]

    @staticmethod
    def get_broadcast_for_if(name):
        import importlib
        if importlib.util.find_spec('netifaces') != None:
            import netifaces
            return netifaces.ifaddresses(name)[netifaces.AF_INET][0]['broadcast']
        else:
            RNS.log("Getting interface addresses from device names requires the netifaces module.", RNS.LOG_CRITICAL)
            RNS.log("You can install it with the command: python3 -m pip install netifaces", RNS.LOG_CRITICAL)
            RNS.panic()
        import RNS.vendor.ifaddr.niwrapper as netinfo
        ifaddr = netinfo.ifaddresses(name)
        return ifaddr[netinfo.AF_INET][0]["broadcast"]

    def __init__(self, owner, name, device=None, bindip=None, bindport=None, i2p_tunneled=False):
        self.rxb = 0
        self.txb = 0
        super().__init__()

        self.HW_MTU = 1064

@@ -416,6 +431,7 @@ class TCPServerInterface(Interface):
        self.IN = True
        self.OUT = False
        self.name = name
        self.detached = False

        self.i2p_tunneled = i2p_tunneled
        self.mode = RNS.Interfaces.Interface.Interface.MODE_FULL

@@ -442,7 +458,7 @@ class TCPServerInterface(Interface):
        self.bitrate = TCPServerInterface.BITRATE_GUESS

        thread = threading.Thread(target=self.server.serve_forever)
        thread.setDaemon(True)
        thread.daemon = True
        thread.start()

        self.online = True

@@ -458,9 +474,27 @@ class TCPServerInterface(Interface):
        spawned_interface.target_port = str(handler.client_address[1])
        spawned_interface.parent_interface = self
        spawned_interface.bitrate = self.bitrate

        spawned_interface.ifac_size = self.ifac_size
        spawned_interface.ifac_netname = self.ifac_netname
        spawned_interface.ifac_netkey = self.ifac_netkey
        if spawned_interface.ifac_netname != None or spawned_interface.ifac_netkey != None:
            ifac_origin = b""
            if spawned_interface.ifac_netname != None:
                ifac_origin += RNS.Identity.full_hash(spawned_interface.ifac_netname.encode("utf-8"))
            if spawned_interface.ifac_netkey != None:
                ifac_origin += RNS.Identity.full_hash(spawned_interface.ifac_netkey.encode("utf-8"))

            ifac_origin_hash = RNS.Identity.full_hash(ifac_origin)
            spawned_interface.ifac_key = RNS.Cryptography.hkdf(
                length=64,
                derive_from=ifac_origin_hash,
                salt=RNS.Reticulum.IFAC_SALT,
                context=None
            )
            spawned_interface.ifac_identity = RNS.Identity.from_bytes(spawned_interface.ifac_key)
            spawned_interface.ifac_signature = spawned_interface.ifac_identity.sign(RNS.Identity.full_hash(spawned_interface.ifac_key))

        spawned_interface.announce_rate_target = self.announce_rate_target
        spawned_interface.announce_rate_grace = self.announce_rate_grace
        spawned_interface.announce_rate_penalty = self.announce_rate_penalty

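The spawned interface's IFAC key above is derived with HKDF (length 64) from a hash of the network name and/or key. A minimal HKDF-SHA256 sketch per RFC 5869; `RNS.Cryptography.hkdf` is assumed, not verified here, to follow the same extract-then-expand construction:

```python
import hashlib, hmac

def hkdf(length, derive_from, salt=b"", context=b""):
    # Extract-then-expand per RFC 5869, with SHA-256
    hash_len = 32
    if not salt:
        salt = bytes(hash_len)
    if context is None:
        context = b""
    prk = hmac.new(salt, derive_from, hashlib.sha256).digest()  # extract
    block = b""
    okm = b""
    for i in range((length + hash_len - 1) // hash_len):        # expand
        block = hmac.new(prk, block + context + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

# Illustrative input standing in for ifac_origin_hash
ifac_key = hkdf(length=64, derive_from=hashlib.sha256(b"netname-hash").digest())
```

The derivation is deterministic, so every node configured with the same network name and key arrives at the same interface access key.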
@@ -472,12 +506,34 @@ class TCPServerInterface(Interface):
        self.clients += 1
        spawned_interface.read_loop()

    def received_announce(self, from_spawned=False):
        if from_spawned: self.ia_freq_deque.append(time.time())

    def sent_announce(self, from_spawned=False):
        if from_spawned: self.oa_freq_deque.append(time.time())

    def processOutgoing(self, data):
        pass


    def detach(self):
        if self.server != None:
            if hasattr(self.server, "shutdown"):
                if callable(self.server.shutdown):
                    try:
                        RNS.log("Detaching "+str(self), RNS.LOG_DEBUG)
                        self.server.shutdown()
                        self.detached = True
                        self.server = None

                    except Exception as e:
                        RNS.log("Error while shutting down server for "+str(self)+": "+str(e))


    def __str__(self):
        return "TCPServerInterface["+self.name+"/"+self.bind_ip+":"+str(self.bind_port)+"]"


class TCPInterfaceHandler(socketserver.BaseRequestHandler):
    def __init__(self, callback, *args, **keys):
        self.callback = callback

@@ -34,29 +34,18 @@ class UDPInterface(Interface):

    @staticmethod
    def get_address_for_if(name):
        import importlib
        if importlib.util.find_spec('netifaces') != None:
            import netifaces
            return netifaces.ifaddresses(name)[netifaces.AF_INET][0]['addr']
        else:
            RNS.log("Getting interface addresses from device names requires the netifaces module.", RNS.LOG_CRITICAL)
            RNS.log("You can install it with the command: python3 -m pip install netifaces", RNS.LOG_CRITICAL)
            RNS.panic()
        import RNS.vendor.ifaddr.niwrapper as netinfo
        ifaddr = netinfo.ifaddresses(name)
        return ifaddr[netinfo.AF_INET][0]["addr"]

    @staticmethod
    def get_broadcast_for_if(name):
        import importlib
        if importlib.util.find_spec('netifaces') != None:
            import netifaces
            return netifaces.ifaddresses(name)[netifaces.AF_INET][0]['broadcast']
        else:
            RNS.log("Getting interface addresses from device names requires the netifaces module.", RNS.LOG_CRITICAL)
            RNS.log("You can install it with the command: python3 -m pip install netifaces", RNS.LOG_CRITICAL)
            RNS.panic()
        import RNS.vendor.ifaddr.niwrapper as netinfo
        ifaddr = netinfo.ifaddresses(name)
        return ifaddr[netinfo.AF_INET][0]["broadcast"]

    def __init__(self, owner, name, device=None, bindip=None, bindport=None, forwardip=None, forwardport=None):
        self.rxb = 0
        self.txb = 0
        super().__init__()

        self.HW_MTU = 1064

@@ -89,7 +78,7 @@ class UDPInterface(Interface):
        self.server = socketserver.UDPServer(address, handlerFactory(self.processIncoming))

        thread = threading.Thread(target=self.server.serve_forever)
        thread.setDaemon(True)
        thread.daemon = True
        thread.start()

        self.online = True

@@ -22,6 +22,7 @@

import os
import glob
import RNS.Interfaces.Android

modules = glob.glob(os.path.dirname(__file__)+"/*.py")
__all__ = [ os.path.basename(f)[:-3] for f in modules if not f.endswith('__init__.py')]
__all__ = [ os.path.basename(f)[:-3] for f in modules if not f.endswith('__init__.py')]

RNS/Link.py (576 changed lines)

@ -1,6 +1,6 @@
|
|||
# MIT License
|
||||
#
|
||||
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
|
||||
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors.
|
||||
#
|
||||
# Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
# of this software and associated documentation files (the "Software"), to deal
|
||||
|
@ -20,24 +20,18 @@
|
|||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
# SOFTWARE.
|
||||
|
||||
from cryptography.hazmat.backends import default_backend
|
||||
from cryptography.hazmat.primitives import hashes
|
||||
from cryptography.hazmat.primitives import serialization
|
||||
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey
|
||||
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
|
||||
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
|
||||
from cryptography.fernet import Fernet
|
||||
from RNS.Cryptography import X25519PrivateKey, X25519PublicKey, Ed25519PrivateKey, Ed25519PublicKey
|
||||
from RNS.Cryptography import Fernet
|
||||
from RNS.Channel import Channel, LinkChannelOutlet
|
||||
|
||||
from time import sleep
|
||||
from .vendor import umsgpack as umsgpack
|
||||
import threading
|
||||
import base64
|
||||
import inspect
|
||||
import math
|
||||
import time
|
||||
import RNS
|
||||
|
||||
import traceback
|
||||
|
||||
cio_default_backend = default_backend()
|
||||
|
||||
class LinkCallbacks:
|
||||
def __init__(self):
|
||||
|
@ -53,7 +47,7 @@ class Link:
|
|||
"""
This class is used to establish and manage links to other peers. When a
link instance is created, Reticulum will attempt to establish verified
connectivity with the specified destination.
and encrypted connectivity with the specified destination.

:param destination: A :ref:`RNS.Destination<api-destination>` instance which to establish a link to.
:param established_callback: An optional function or method with the signature *callback(link)* to be called when the link has been established.

@@ -67,7 +61,7 @@ class Link:
ECPUBSIZE = 32+32
KEYSIZE = 32

MDU = math.floor((RNS.Reticulum.MTU-RNS.Reticulum.IFAC_MIN_SIZE-RNS.Reticulum.HEADER_MINSIZE-RNS.Identity.OPTIMISED_FERNET_OVERHEAD)/RNS.Identity.AES128_BLOCKSIZE)*RNS.Identity.AES128_BLOCKSIZE - 1
MDU = math.floor((RNS.Reticulum.MTU-RNS.Reticulum.IFAC_MIN_SIZE-RNS.Reticulum.HEADER_MINSIZE-RNS.Identity.FERNET_OVERHEAD)/RNS.Identity.AES128_BLOCKSIZE)*RNS.Identity.AES128_BLOCKSIZE - 1

ESTABLISHMENT_TIMEOUT_PER_HOP = RNS.Reticulum.DEFAULT_PER_HOP_TIMEOUT
"""

@@ -118,8 +112,10 @@ class Link:
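The two MDU lines in the hunk above round the usable payload down to a whole number of AES-128 blocks and reserve one byte. A minimal sketch of that arithmetic, using illustrative stand-in values rather than the actual `RNS.Reticulum` and `RNS.Identity` constants (which may differ):

```python
import math

# Illustrative stand-ins for the constants referenced in the MDU formula;
# the real values live in RNS.Reticulum and RNS.Identity.
MTU = 500
IFAC_MIN_SIZE = 1
HEADER_MINSIZE = 19
FERNET_OVERHEAD = 48
AES128_BLOCKSIZE = 16

# Round the remaining payload space down to whole AES blocks, minus one byte.
MDU = math.floor((MTU - IFAC_MIN_SIZE - HEADER_MINSIZE - FERNET_OVERHEAD) / AES128_BLOCKSIZE) * AES128_BLOCKSIZE - 1
print(MDU)
```

With these stand-in values the result is 431 bytes; the point of the expression is that `MDU + 1` is always a multiple of the cipher block size.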
link = Link(owner = owner, peer_pub_bytes=data[:Link.ECPUBSIZE//2], peer_sig_pub_bytes=data[Link.ECPUBSIZE//2:Link.ECPUBSIZE])
link.set_link_id(packet)
link.destination = packet.destination
link.establishment_timeout = Link.ESTABLISHMENT_TIMEOUT_PER_HOP * max(1, packet.hops)
link.establishment_timeout = Link.ESTABLISHMENT_TIMEOUT_PER_HOP * max(1, packet.hops) + Link.KEEPALIVE
link.establishment_cost += len(packet.raw)
RNS.log("Validating link request "+RNS.prettyhexrep(link.link_id), RNS.LOG_VERBOSE)
RNS.log(f"Establishment timeout is {RNS.prettytime(link.establishment_timeout)} for incoming link request "+RNS.prettyhexrep(link.link_id), RNS.LOG_EXTREME)
link.handshake()
link.attached_interface = packet.receiving_interface
link.prove()

@@ -128,7 +124,7 @@ class Link:
link.last_inbound = time.time()
link.start_watchdog()

RNS.log("Incoming link request "+str(link)+" accepted", RNS.LOG_VERBOSE)
RNS.log("Incoming link request "+str(link)+" accepted", RNS.LOG_DEBUG)
return link

except Exception as e:

@@ -137,7 +133,7 @@ class Link:
return None

else:
RNS.log("Invalid link request payload size, dropping request", RNS.LOG_VERBOSE)
RNS.log("Invalid link request payload size, dropping request", RNS.LOG_DEBUG)
return None


@@ -145,6 +141,7 @@ class Link:
if destination != None and destination.type != RNS.Destination.SINGLE:
raise TypeError("Links can only be established to the \"single\" destination type")
self.rtt = None
self.establishment_cost = 0
self.callbacks = LinkCallbacks()
self.resource_strategy = Link.ACCEPT_NONE
self.outgoing_resources = []

@@ -152,10 +149,15 @@ class Link:
self.pending_requests = []
self.last_inbound = 0
self.last_outbound = 0
self.last_proof = 0
self.last_data = 0
self.tx = 0
self.rx = 0
self.txbytes = 0
self.rxbytes = 0
self.rssi = None
self.snr = None
self.q = None
self.traffic_timeout_factor = Link.TRAFFIC_TIMEOUT_FACTOR
self.keepalive_timeout_factor = Link.KEEPALIVE_TIMEOUT_FACTOR
self.keepalive = Link.KEEPALIVE

@@ -166,31 +168,31 @@ class Link:
self.type = RNS.Destination.LINK
self.owner = owner
self.destination = destination
self.expected_hops = None
self.attached_interface = None
self.__remote_identity = None
self.__track_phy_stats = False
self._channel = None

if self.destination == None:
self.initiator = False
self.prv = self.owner.identity.prv
self.prv = X25519PrivateKey.generate()
self.sig_prv = self.owner.identity.sig_prv
else:
self.initiator = True
self.establishment_timeout = Link.ESTABLISHMENT_TIMEOUT_PER_HOP * max(1, RNS.Transport.hops_to(destination.hash))
self.expected_hops = RNS.Transport.hops_to(self.destination.hash)
self.establishment_timeout = RNS.Reticulum.get_instance().get_first_hop_timeout(destination.hash)
self.establishment_timeout += Link.ESTABLISHMENT_TIMEOUT_PER_HOP * max(1, RNS.Transport.hops_to(destination.hash))
self.prv = X25519PrivateKey.generate()
self.sig_prv = Ed25519PrivateKey.generate()

self.fernet = None

self.pub = self.prv.public_key()
self.pub_bytes = self.pub.public_bytes(
encoding=serialization.Encoding.Raw,
format=serialization.PublicFormat.Raw
)
self.pub_bytes = self.pub.public_bytes()

self.sig_pub = self.sig_prv.public_key()
self.sig_pub_bytes = self.sig_pub.public_bytes(
encoding=serialization.Encoding.Raw,
format=serialization.PublicFormat.Raw
)
self.sig_pub_bytes = self.sig_pub.public_bytes()

if peer_pub_bytes == None:
self.peer_pub = None

@@ -205,20 +207,18 @@ class Link:
self.set_link_closed_callback(closed_callback)

if (self.initiator):
peer_pub_bytes = self.destination.identity.get_public_key()[:Link.ECPUBSIZE//2]
peer_sig_pub_bytes = self.destination.identity.get_public_key()[Link.ECPUBSIZE//2:Link.ECPUBSIZE]
self.request_data = self.pub_bytes+self.sig_pub_bytes
self.packet = RNS.Packet(destination, self.request_data, packet_type=RNS.Packet.LINKREQUEST)
self.packet.pack()
self.establishment_cost += len(self.packet.raw)
self.set_link_id(self.packet)
self.load_peer(peer_pub_bytes, peer_sig_pub_bytes)
self.handshake()
RNS.Transport.register_link(self)
self.request_time = time.time()
self.start_watchdog()
self.packet.send()
self.had_outbound()
RNS.log("Link request "+RNS.prettyhexrep(self.link_id)+" sent to "+str(self.destination), RNS.LOG_DEBUG)
RNS.log(f"Establishment timeout is {RNS.prettytime(self.establishment_timeout)} for link request "+RNS.prettyhexrep(self.link_id), RNS.LOG_EXTREME)


def load_peer(self, peer_pub_bytes, peer_sig_pub_bytes):

@@ -236,25 +236,28 @@ class Link:
self.hash = self.link_id

def handshake(self):
self.status = Link.HANDSHAKE
self.shared_key = self.prv.exchange(self.peer_pub)
if self.status == Link.PENDING and self.prv != None:
self.status = Link.HANDSHAKE
self.shared_key = self.prv.exchange(self.peer_pub)

self.derived_key = RNS.Cryptography.hkdf(
length=32,
derive_from=self.shared_key,
salt=self.get_salt(),
context=self.get_context(),
)
else:
RNS.log("Handshake attempt on "+str(self)+" with invalid state "+str(self.status), RNS.LOG_ERROR)

# TODO: Improve this re-allocation of HKDF
self.derived_key = HKDF(
algorithm=hashes.SHA256(),
length=32,
salt=self.get_salt(),
info=self.get_context(),
backend=cio_default_backend,
).derive(self.shared_key)

def prove(self):
signed_data = self.link_id+self.pub_bytes+self.sig_pub_bytes
signature = self.owner.identity.sign(signed_data)

proof_data = signature
proof_data = signature+self.pub_bytes
proof = RNS.Packet(self, proof_data, packet_type=RNS.Packet.PROOF, context=RNS.Packet.LRPROOF)
proof.send()
self.establishment_cost += len(proof.raw)
self.had_outbound()


@@ -272,30 +275,50 @@ class Link:
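Both the removed `HKDF(...)` path and the replacement `RNS.Cryptography.hkdf(...)` call above implement the same extract-and-expand construction from RFC 5869, applied to the X25519 shared secret with the link ID as salt. A stdlib-only sketch of that derivation (this is not the library's own implementation, just the standard algorithm it names):

```python
import hashlib
import hmac

def hkdf(length, derive_from, salt=b"", context=b""):
    # Minimal RFC 5869 HKDF-SHA256, mirroring the handshake's
    # derived_key = hkdf(length=32, derive_from=shared_key, ...) call shape.
    hash_len = 32
    if not salt:
        salt = bytes(hash_len)
    prk = hmac.new(salt, derive_from, hashlib.sha256).digest()   # extract
    okm, block = b"", b""
    for i in range((length + hash_len - 1) // hash_len):         # expand
        block = hmac.new(prk, block + context + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

derived_key = hkdf(32, b"example shared secret", salt=b"example link id")
```

A 32-byte `derived_key` comes out regardless of the input secret's length, which is why the code can feed the raw X25519 exchange output straight in.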
self.had_outbound()

def validate_proof(self, packet):
if self.status == Link.HANDSHAKE:
if self.initiator and len(packet.data) == RNS.Identity.SIGLENGTH//8:
signed_data = self.link_id+self.peer_pub_bytes+self.peer_sig_pub_bytes
signature = packet.data[:RNS.Identity.SIGLENGTH//8]

if self.destination.identity.validate(signature, signed_data):
self.rtt = time.time() - self.request_time
self.attached_interface = packet.receiving_interface
self.__remote_identity = self.destination.identity
RNS.Transport.activate_link(self)
RNS.log("Link "+str(self)+" established with "+str(self.destination)+", RTT is "+str(round(self.rtt, 3))+"s", RNS.LOG_VERBOSE)
rtt_data = umsgpack.packb(self.rtt)
rtt_packet = RNS.Packet(self, rtt_data, context=RNS.Packet.LRRTT)
rtt_packet.send()
self.had_outbound()
try:
if self.status == Link.PENDING:
if self.initiator and len(packet.data) == RNS.Identity.SIGLENGTH//8+Link.ECPUBSIZE//2:
peer_pub_bytes = packet.data[RNS.Identity.SIGLENGTH//8:RNS.Identity.SIGLENGTH//8+Link.ECPUBSIZE//2]
peer_sig_pub_bytes = self.destination.identity.get_public_key()[Link.ECPUBSIZE//2:Link.ECPUBSIZE]
self.load_peer(peer_pub_bytes, peer_sig_pub_bytes)
self.handshake()

self.status = Link.ACTIVE
self.activated_at = time.time()
if self.callbacks.link_established != None:
thread = threading.Thread(target=self.callbacks.link_established, args=(self,))
thread.setDaemon(True)
thread.start()
else:
RNS.log("Invalid link proof signature received by "+str(self)+". Ignoring.", RNS.LOG_DEBUG)
self.establishment_cost += len(packet.raw)
signed_data = self.link_id+self.peer_pub_bytes+self.peer_sig_pub_bytes
signature = packet.data[:RNS.Identity.SIGLENGTH//8]

if self.destination.identity.validate(signature, signed_data):
if self.status != Link.HANDSHAKE:
raise IOError("Invalid link state for proof validation: "+str(self.status))

self.rtt = time.time() - self.request_time
self.attached_interface = packet.receiving_interface
self.__remote_identity = self.destination.identity
self.status = Link.ACTIVE
self.activated_at = time.time()
self.last_proof = self.activated_at
RNS.Transport.activate_link(self)
RNS.log("Link "+str(self)+" established with "+str(self.destination)+", RTT is "+str(round(self.rtt, 3))+"s", RNS.LOG_VERBOSE)

if self.rtt != None and self.establishment_cost != None and self.rtt > 0 and self.establishment_cost > 0:
self.establishment_rate = self.establishment_cost/self.rtt

rtt_data = umsgpack.packb(self.rtt)
rtt_packet = RNS.Packet(self, rtt_data, context=RNS.Packet.LRRTT)
rtt_packet.send()
self.had_outbound()

if self.callbacks.link_established != None:
thread = threading.Thread(target=self.callbacks.link_established, args=(self,))
thread.daemon = True
thread.start()
else:
RNS.log("Invalid link proof signature received by "+str(self)+". Ignoring.", RNS.LOG_DEBUG)

except Exception as e:
self.status = Link.CLOSED
RNS.log("An error occurred while validating link request proof on "+str(self)+".", RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)


def identify(self, identity):

@@ -307,7 +330,7 @@ class Link:
:param identity: An RNS.Identity instance to identify as.
"""
if self.initiator:
if self.initiator and self.status == Link.ACTIVE:
signed_data = self.link_id + identity.get_public_key()
signature = identity.sign(signed_data)
proof_data = identity.get_public_key() + signature

@@ -333,7 +356,7 @@ class Link:
packed_request = umsgpack.packb(unpacked_request)

if timeout == None:
timeout = self.rtt * self.traffic_timeout_factor + RNS.Resource.RESPONSE_MAX_GRACE_TIME/4.0
timeout = self.rtt * self.traffic_timeout_factor + RNS.Resource.RESPONSE_MAX_GRACE_TIME*1.125

if len(packed_request) <= Link.MDU:
request_packet = RNS.Packet(self, packed_request, RNS.Packet.DATA, context = RNS.Packet.REQUEST)

@@ -371,25 +394,68 @@ class Link:
def rtt_packet(self, packet):
try:
# TODO: This is crude, we should use the delta
# to model a more representative per-bit round
# trip time, and use that to set a sensible RTT
# expectancy for the link. This will have to do
# for now though.
measured_rtt = time.time() - self.request_time
plaintext = self.decrypt(packet.data)
rtt = umsgpack.unpackb(plaintext)
self.rtt = max(measured_rtt, rtt)
self.status = Link.ACTIVE
self.activated_at = time.time()
if plaintext != None:
rtt = umsgpack.unpackb(plaintext)
self.rtt = max(measured_rtt, rtt)
self.status = Link.ACTIVE
self.activated_at = time.time()

if self.rtt != None and self.establishment_cost != None and self.rtt > 0 and self.establishment_cost > 0:
self.establishment_rate = self.establishment_cost/self.rtt

try:
if self.owner.callbacks.link_established != None:
self.owner.callbacks.link_established(self)
except Exception as e:
RNS.log("Error occurred in external link establishment callback. The contained exception was: "+str(e), RNS.LOG_ERROR)


if self.owner.callbacks.link_established != None:
self.owner.callbacks.link_established(self)
except Exception as e:
RNS.log("Error occurred while processing RTT packet, tearing down link. The contained exception was: "+str(e), RNS.LOG_ERROR)
self.teardown()

def track_phy_stats(self, track):
"""
You can enable physical layer statistics on a per-link basis. If this is enabled,
and the link is running over an interface that supports reporting physical layer
statistics, you will be able to retrieve stats such as *RSSI*, *SNR* and physical
*Link Quality* for the link.

:param track: Whether or not to keep track of physical layer statistics. Value must be ``True`` or ``False``.
"""
if track:
self.__track_phy_stats = True
else:
self.__track_phy_stats = False

def get_rssi(self):
"""
:returns: The physical layer *Received Signal Strength Indication* if available, otherwise ``None``. Physical layer statistics must be enabled on the link for this method to return a value.
"""
return self.rssi

def get_snr(self):
"""
:returns: The physical layer *Signal-to-Noise Ratio* if available, otherwise ``None``. Physical layer statistics must be enabled on the link for this method to return a value.
"""
return self.snr

def get_q(self):
"""
:returns: The physical layer *Link Quality* if available, otherwise ``None``. Physical layer statistics must be enabled on the link for this method to return a value.
"""
return self.q

def get_establishment_rate(self):
"""
:returns: The data transfer rate at which the link establishment procedure occurred, in bits per second.
"""
if self.establishment_rate != None:
return self.establishment_rate*8
else:
return None

def get_salt(self):
return self.link_id


@@ -398,7 +464,7 @@ class Link:
def no_inbound_for(self):
"""
:returns: The time in seconds since last inbound packet on the link.
:returns: The time in seconds since last inbound packet on the link. This includes keepalive packets.
"""
activated_at = self.activated_at if self.activated_at != None else 0
last_inbound = max(self.last_inbound, activated_at)

@@ -406,24 +472,32 @@ class Link:
def no_outbound_for(self):
"""
:returns: The time in seconds since last outbound packet on the link.
:returns: The time in seconds since last outbound packet on the link. This includes keepalive packets.
"""
return time.time() - self.last_outbound

def no_data_for(self):
"""
:returns: The time in seconds since payload data traversed the link. This excludes keepalive packets.
"""
return time.time() - self.last_data

def inactive_for(self):
"""
:returns: The time in seconds since activity on the link.
:returns: The time in seconds since activity on the link. This includes keepalive packets.
"""
return min(self.no_inbound_for(), self.no_outbound_for())

def get_remote_identity(self):
"""
:returns: The identity of the remote peer, if it is known
:returns: The identity of the remote peer, if it is known. Calling this method will not query the remote initiator to reveal its identity. Returns ``None`` if the link initiator has not already independently called the ``identify(identity)`` method.
"""
return self.__remote_identity

def had_outbound(self):
def had_outbound(self, is_keepalive=False):
self.last_outbound = time.time()
if not is_keepalive:
self.last_data = self.last_outbound

def teardown(self):
"""

@@ -450,6 +524,7 @@ class Link:
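The `had_outbound(is_keepalive=False)` change above separates payload activity from keepalive traffic: keepalives refresh `last_outbound` but not `last_data`, so `no_data_for()` excludes them while `no_outbound_for()` still counts them. A self-contained sketch of that bookkeeping (the `ActivityTracker` class name is hypothetical, not part of RNS):

```python
import time

class ActivityTracker:
    # Sketch of the keepalive-aware timestamps the link keeps.
    def __init__(self):
        self.last_outbound = 0.0
        self.last_data = 0.0

    def had_outbound(self, is_keepalive=False):
        # Every packet refreshes last_outbound; only payload
        # traffic refreshes last_data.
        self.last_outbound = time.time()
        if not is_keepalive:
            self.last_data = self.last_outbound
```

This is why the watchdog can keep a quiet link alive with keepalives while an application can still ask how long it has been since real data moved.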
self.teardown_reason = Link.DESTINATION_CLOSED
else:
self.teardown_reason = Link.INITIATOR_CLOSED
self.__update_phy_stats(packet)
self.link_closed()
except Exception as e:
pass

@@ -459,6 +534,8 @@ class Link:
resource.cancel()
for resource in self.outgoing_resources:
resource.cancel()
if self._channel:
self._channel._shutdown()

self.prv = None
self.pub = None

@@ -480,13 +557,17 @@ class Link:
def start_watchdog(self):
thread = threading.Thread(target=self.__watchdog_job)
thread.setDaemon(True)
thread.daemon = True
thread.start()

def __watchdog_job(self):
while not self.status == Link.CLOSED:
while (self.watchdog_lock):
sleep(max(self.rtt, 0.025))
rtt_wait = 0.025
if hasattr(self, "rtt") and self.rtt:
rtt_wait = self.rtt

sleep(max(rtt_wait, 0.025))

if not self.status == Link.CLOSED:
# Link was initiated, but no response

@@ -505,19 +586,19 @@ class Link:
next_check = self.request_time + self.establishment_timeout
sleep_time = next_check - time.time()
if time.time() >= self.request_time + self.establishment_timeout:
if self.initiator:
RNS.log("Timeout waiting for link request proof", RNS.LOG_DEBUG)
else:
RNS.log("Timeout waiting for RTT packet from link initiator", RNS.LOG_DEBUG)

self.status = Link.CLOSED
self.teardown_reason = Link.TIMEOUT
self.link_closed()
sleep_time = 0.001

if self.initiator:
RNS.log("Timeout waiting for link request proof", RNS.LOG_DEBUG)
else:
RNS.log("Timeout waiting for RTT packet from link initiator", RNS.LOG_DEBUG)

elif self.status == Link.ACTIVE:
activated_at = self.activated_at if self.activated_at != None else 0
last_inbound = max(self.last_inbound, activated_at)
last_inbound = max(max(self.last_inbound, self.last_proof), activated_at)

if time.time() >= last_inbound + self.keepalive:
if self.initiator:

@@ -549,10 +630,25 @@ class Link:
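The watchdog change above replaces `sleep(max(self.rtt, 0.025))` with a guarded lookup, so the loop no longer crashes when `self.rtt` is still unset or `None` before the first round-trip measurement arrives. A sketch of just that fallback logic (the helper name is hypothetical):

```python
def watchdog_wait(link_rtt):
    # Fall back to a 25 ms poll interval when the RTT is unknown
    # (None) or zero; otherwise wait one RTT, but never less than 25 ms.
    rtt_wait = 0.025
    if link_rtt:
        rtt_wait = link_rtt
    return max(rtt_wait, 0.025)
```

Using truthiness here covers both the `None` and the not-yet-measured zero case in one check.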
sleep(sleep_time)


def __update_phy_stats(self, packet, query_shared = True):
if self.__track_phy_stats:
if query_shared:
reticulum = RNS.Reticulum.get_instance()
if packet.rssi == None: packet.rssi = reticulum.get_packet_rssi(packet.packet_hash)
if packet.snr == None: packet.snr = reticulum.get_packet_snr(packet.packet_hash)
if packet.q == None: packet.q = reticulum.get_packet_q(packet.packet_hash)

if packet.rssi != None:
self.rssi = packet.rssi
if packet.snr != None:
self.snr = packet.snr
if packet.q != None:
self.q = packet.q

def send_keepalive(self):
keepalive_packet = RNS.Packet(self, bytes([0xFF]), context=RNS.Packet.KEEPALIVE)
keepalive_packet.send()
self.had_outbound()
self.had_outbound(is_keepalive = True)

def handle_request(self, request_id, unpacked_request):
if self.status == Link.ACTIVE:

@@ -577,7 +673,13 @@ class Link:

if allowed:
RNS.log("Handling request "+RNS.prettyhexrep(request_id)+" for: "+str(path), RNS.LOG_DEBUG)
response = response_generator(path, request_data, request_id, self.__remote_identity, requested_at)
if len(inspect.signature(response_generator).parameters) == 5:
response = response_generator(path, request_data, request_id, self.__remote_identity, requested_at)
elif len(inspect.signature(response_generator).parameters) == 6:
response = response_generator(path, request_data, request_id, self.link_id, self.__remote_identity, requested_at)
else:
raise TypeError("Invalid signature for response generator callback")

if response != None:
packed_response = umsgpack.packb([request_id, response])

@@ -597,7 +699,9 @@ class Link:
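The new dispatch above inspects the registered response generator and supports both the older five-parameter callback signature and the newer six-parameter one, where the extra argument is the link ID. A self-contained sketch of that dispatch (the `dispatch_response`, `gen5`, and `gen6` names are hypothetical):

```python
import inspect

def dispatch_response(response_generator, path, data, request_id, link_id, remote_identity, requested_at):
    # Call the callback with the argument list matching its declared
    # parameter count, as the handler above does.
    n = len(inspect.signature(response_generator).parameters)
    if n == 5:
        return response_generator(path, data, request_id, remote_identity, requested_at)
    elif n == 6:
        return response_generator(path, data, request_id, link_id, remote_identity, requested_at)
    else:
        raise TypeError("Invalid signature for response generator callback")

def gen5(path, data, request_id, remote_identity, requested_at):
    return ("v5", path)

def gen6(path, data, request_id, link_id, remote_identity, requested_at):
    return ("v6", link_id)
```

Inspecting the signature keeps old five-parameter callbacks working unchanged while letting new code receive the link ID.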
remove = pending_request
try:
pending_request.response_size = response_size
pending_request.response_transfer_size = response_transfer_size
if pending_request.response_transfer_size == None:
pending_request.response_transfer_size = 0
pending_request.response_transfer_size += response_transfer_size
pending_request.response_received(response_data)
except Exception as e:
RNS.log("Error occurred while handling response. The contained exception was: "+str(e), RNS.LOG_ERROR)

@@ -633,6 +737,16 @@ class Link:
if pending_request.request_id == resource.request_id:
pending_request.request_timed_out(None)

def get_channel(self):
"""
Get the ``Channel`` for this link.

:return: ``Channel`` object
"""
if self._channel is None:
self._channel = Channel(LinkChannelOutlet(self))
return self._channel

def receive(self, packet):
self.watchdog_lock = True
if not self.status == Link.CLOSED and not (self.initiator and packet.context == RNS.Packet.KEEPALIVE and packet.data == bytes([0xFF])):

@@ -640,134 +754,167 @@ class Link:
RNS.log("Link-associated packet received on unexpected interface! Someone might be trying to manipulate your communication!", RNS.LOG_ERROR)
else:
self.last_inbound = time.time()
if packet.context != RNS.Packet.KEEPALIVE:
self.last_data = self.last_inbound
self.rx += 1
self.rxbytes += len(packet.data)
if self.status == Link.STALE:
self.status = Link.ACTIVE

if packet.packet_type == RNS.Packet.DATA:
should_query = False
if packet.context == RNS.Packet.NONE:
plaintext = self.decrypt(packet.data)
if self.callbacks.packet != None:
thread = threading.Thread(target=self.callbacks.packet, args=(plaintext, packet))
thread.setDaemon(True)
thread.start()

if self.destination.proof_strategy == RNS.Destination.PROVE_ALL:
packet.prove()
if plaintext != None:
if self.callbacks.packet != None:
thread = threading.Thread(target=self.callbacks.packet, args=(plaintext, packet))
thread.daemon = True
thread.start()

if self.destination.proof_strategy == RNS.Destination.PROVE_ALL:
packet.prove()
should_query = True

elif self.destination.proof_strategy == RNS.Destination.PROVE_APP:
if self.destination.callbacks.proof_requested:
try:
self.destination.callbacks.proof_requested(packet)
except Exception as e:
RNS.log("Error while executing proof request callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
elif self.destination.proof_strategy == RNS.Destination.PROVE_APP:
if self.destination.callbacks.proof_requested:
try:
if self.destination.callbacks.proof_requested(packet):
packet.prove()
should_query = True
except Exception as e:
RNS.log("Error while executing proof request callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)

self.__update_phy_stats(packet, query_shared=should_query)

elif packet.context == RNS.Packet.LINKIDENTIFY:
plaintext = self.decrypt(packet.data)
if plaintext != None:
if not self.initiator and len(plaintext) == RNS.Identity.KEYSIZE//8 + RNS.Identity.SIGLENGTH//8:
public_key = plaintext[:RNS.Identity.KEYSIZE//8]
signed_data = self.link_id+public_key
signature = plaintext[RNS.Identity.KEYSIZE//8:RNS.Identity.KEYSIZE//8+RNS.Identity.SIGLENGTH//8]
identity = RNS.Identity(create_keys=False)
identity.load_public_key(public_key)

if not self.initiator and len(plaintext) == RNS.Identity.KEYSIZE//8 + RNS.Identity.SIGLENGTH//8:
public_key = plaintext[:RNS.Identity.KEYSIZE//8]
signed_data = self.link_id+public_key
signature = plaintext[RNS.Identity.KEYSIZE//8:RNS.Identity.KEYSIZE//8+RNS.Identity.SIGLENGTH//8]
identity = RNS.Identity(create_keys=False)
identity.load_public_key(public_key)

if identity.validate(signature, signed_data):
self.__remote_identity = identity
if self.callbacks.remote_identified != None:
try:
self.callbacks.remote_identified(self, self.__remote_identity)
except Exception as e:
RNS.log("Error while executing remote identified callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
if identity.validate(signature, signed_data):
self.__remote_identity = identity
if self.callbacks.remote_identified != None:
try:
self.callbacks.remote_identified(self, self.__remote_identity)
except Exception as e:
RNS.log("Error while executing remote identified callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)

self.__update_phy_stats(packet, query_shared=True)

elif packet.context == RNS.Packet.REQUEST:
try:
request_id = packet.getTruncatedHash()
packed_request = self.decrypt(packet.data)
unpacked_request = umsgpack.unpackb(packed_request)
self.handle_request(request_id, unpacked_request)
if packed_request != None:
unpacked_request = umsgpack.unpackb(packed_request)
self.handle_request(request_id, unpacked_request)
self.__update_phy_stats(packet, query_shared=True)
except Exception as e:
RNS.log("Error occurred while handling request. The contained exception was: "+str(e), RNS.LOG_ERROR)

elif packet.context == RNS.Packet.RESPONSE:
try:
packed_response = self.decrypt(packet.data)
unpacked_response = umsgpack.unpackb(packed_response)
request_id = unpacked_response[0]
response_data = unpacked_response[1]
transfer_size = len(umsgpack.packb(response_data))-2
self.handle_response(request_id, response_data, transfer_size, transfer_size)
if packed_response != None:
unpacked_response = umsgpack.unpackb(packed_response)
request_id = unpacked_response[0]
response_data = unpacked_response[1]
transfer_size = len(umsgpack.packb(response_data))-2
self.handle_response(request_id, response_data, transfer_size, transfer_size)
self.__update_phy_stats(packet, query_shared=True)
except Exception as e:
RNS.log("Error occurred while handling response. The contained exception was: "+str(e), RNS.LOG_ERROR)

elif packet.context == RNS.Packet.LRRTT:
if not self.initiator:
self.rtt_packet(packet)
self.__update_phy_stats(packet, query_shared=True)

elif packet.context == RNS.Packet.LINKCLOSE:
self.teardown_packet(packet)
self.__update_phy_stats(packet, query_shared=True)

elif packet.context == RNS.Packet.RESOURCE_ADV:
packet.plaintext = self.decrypt(packet.data)
if packet.plaintext != None:
self.__update_phy_stats(packet, query_shared=True)

if RNS.ResourceAdvertisement.is_request(packet):
RNS.Resource.accept(packet, callback=self.request_resource_concluded)
elif RNS.ResourceAdvertisement.is_response(packet):
request_id = RNS.ResourceAdvertisement.read_request_id(packet)
for pending_request in self.pending_requests:
if pending_request.request_id == request_id:
RNS.Resource.accept(packet, callback=self.response_resource_concluded, progress_callback=pending_request.response_resource_progress, request_id = request_id)
pending_request.response_size = RNS.ResourceAdvertisement.read_size(packet)
pending_request.response_transfer_size = RNS.ResourceAdvertisement.read_transfer_size(packet)
pending_request.started_at = time.time()
elif self.resource_strategy == Link.ACCEPT_NONE:
pass
elif self.resource_strategy == Link.ACCEPT_APP:
if self.callbacks.resource != None:
try:
resource_advertisement = RNS.ResourceAdvertisement.unpack(packet.plaintext)
resource_advertisement.link = self
if self.callbacks.resource(resource_advertisement):
RNS.Resource.accept(packet, self.callbacks.resource_concluded)
except Exception as e:
RNS.log("Error while executing resource accept callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
elif self.resource_strategy == Link.ACCEPT_ALL:
RNS.Resource.accept(packet, self.callbacks.resource_concluded)
if RNS.ResourceAdvertisement.is_request(packet):
RNS.Resource.accept(packet, callback=self.request_resource_concluded)
elif RNS.ResourceAdvertisement.is_response(packet):
request_id = RNS.ResourceAdvertisement.read_request_id(packet)
for pending_request in self.pending_requests:
if pending_request.request_id == request_id:
response_resource = RNS.Resource.accept(packet, callback=self.response_resource_concluded, progress_callback=pending_request.response_resource_progress, request_id = request_id)
if response_resource != None:
if pending_request.response_size == None:
pending_request.response_size = RNS.ResourceAdvertisement.read_size(packet)
if pending_request.response_transfer_size == None:
pending_request.response_transfer_size = 0
pending_request.response_transfer_size += RNS.ResourceAdvertisement.read_transfer_size(packet)
if pending_request.started_at == None:
pending_request.started_at = time.time()
pending_request.response_resource_progress(response_resource)

elif self.resource_strategy == Link.ACCEPT_NONE:
pass
elif self.resource_strategy == Link.ACCEPT_APP:
if self.callbacks.resource != None:
try:
resource_advertisement = RNS.ResourceAdvertisement.unpack(packet.plaintext)
resource_advertisement.link = self
if self.callbacks.resource(resource_advertisement):
RNS.Resource.accept(packet, self.callbacks.resource_concluded)
except Exception as e:
RNS.log("Error while executing resource accept callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
elif self.resource_strategy == Link.ACCEPT_ALL:
RNS.Resource.accept(packet, self.callbacks.resource_concluded)

elif packet.context == RNS.Packet.RESOURCE_REQ:
|
||||
plaintext = self.decrypt(packet.data)
|
||||
if ord(plaintext[:1]) == RNS.Resource.HASHMAP_IS_EXHAUSTED:
|
||||
resource_hash = plaintext[1+RNS.Resource.MAPHASH_LEN:RNS.Identity.HASHLENGTH//8+1+RNS.Resource.MAPHASH_LEN]
|
||||
else:
|
||||
resource_hash = plaintext[1:RNS.Identity.HASHLENGTH//8+1]
|
||||
for resource in self.outgoing_resources:
|
||||
if resource.hash == resource_hash:
|
||||
# We need to check that this request has not been
|
||||
# received before in order to avoid sequencing errors.
|
||||
if not packet.packet_hash in resource.req_hashlist:
|
||||
resource.req_hashlist.append(packet.packet_hash)
|
||||
resource.request(plaintext)
|
||||
if plaintext != None:
|
||||
self.__update_phy_stats(packet, query_shared=True)
|
||||
if ord(plaintext[:1]) == RNS.Resource.HASHMAP_IS_EXHAUSTED:
|
||||
resource_hash = plaintext[1+RNS.Resource.MAPHASH_LEN:RNS.Identity.HASHLENGTH//8+1+RNS.Resource.MAPHASH_LEN]
|
||||
else:
|
||||
resource_hash = plaintext[1:RNS.Identity.HASHLENGTH//8+1]
|
||||
|
||||
for resource in self.outgoing_resources:
|
||||
if resource.hash == resource_hash:
|
||||
# We need to check that this request has not been
|
||||
# received before in order to avoid sequencing errors.
|
||||
if not packet.packet_hash in resource.req_hashlist:
|
||||
resource.req_hashlist.append(packet.packet_hash)
|
||||
resource.request(plaintext)
|
||||
|
||||
elif packet.context == RNS.Packet.RESOURCE_HMU:
|
||||
plaintext = self.decrypt(packet.data)
|
||||
resource_hash = plaintext[:RNS.Identity.HASHLENGTH//8]
|
||||
for resource in self.incoming_resources:
|
||||
if resource_hash == resource.hash:
|
||||
resource.hashmap_update_packet(plaintext)
|
||||
if plaintext != None:
|
||||
self.__update_phy_stats(packet, query_shared=True)
|
||||
resource_hash = plaintext[:RNS.Identity.HASHLENGTH//8]
|
||||
for resource in self.incoming_resources:
|
||||
if resource_hash == resource.hash:
|
||||
resource.hashmap_update_packet(plaintext)
|
||||
|
||||
elif packet.context == RNS.Packet.RESOURCE_ICL:
|
||||
plaintext = self.decrypt(packet.data)
|
||||
resource_hash = plaintext[:RNS.Identity.HASHLENGTH//8]
|
||||
for resource in self.incoming_resources:
|
||||
if resource_hash == resource.hash:
|
||||
resource.cancel()
|
||||
if plaintext != None:
|
||||
self.__update_phy_stats(packet)
|
||||
resource_hash = plaintext[:RNS.Identity.HASHLENGTH//8]
|
||||
for resource in self.incoming_resources:
|
||||
if resource_hash == resource.hash:
|
||||
resource.cancel()
|
||||
|
||||
elif packet.context == RNS.Packet.KEEPALIVE:
|
||||
if not self.initiator and packet.data == bytes([0xFF]):
|
||||
keepalive_packet = RNS.Packet(self, bytes([0xFE]), context=RNS.Packet.KEEPALIVE)
|
||||
keepalive_packet.send()
|
||||
self.had_outbound()
|
||||
self.had_outbound(is_keepalive = True)
|
||||
|
||||
|
||||
# TODO: find the most efficient way to allow multiple
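The keepalive exchange in the hunk above is a simple two-byte handshake: the link initiator sends 0xFF, and the non-initiator answers with 0xFE. A minimal stand-alone sketch of that decision (the byte values come straight from the diff; the function name is mine):

```python
KEEPALIVE_PING = bytes([0xFF])  # sent by the link initiator
KEEPALIVE_PONG = bytes([0xFE])  # sent back by the non-initiator

def keepalive_response(is_initiator: bool, data: bytes):
    """Return the keepalive payload to send back, or None if no reply is due."""
    if not is_initiator and data == KEEPALIVE_PING:
        return KEEPALIVE_PONG
    return None
```

Only the non-initiating side ever answers, which is what keeps the two peers from ping-ponging forever.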
@@ -777,6 +924,17 @@ class Link:
                    elif packet.context == RNS.Packet.RESOURCE:
                        for resource in self.incoming_resources:
                            resource.receive_part(packet)
                            self.__update_phy_stats(packet)

                    elif packet.context == RNS.Packet.CHANNEL:
                        if not self._channel:
                            RNS.log(f"Channel data received without open channel", RNS.LOG_DEBUG)
                        else:
                            packet.prove()
                            plaintext = self.decrypt(packet.data)
                            if plaintext != None:
                                self.__update_phy_stats(packet)
                                self._channel._receive(plaintext)

                elif packet.packet_type == RNS.Packet.PROOF:
                    if packet.context == RNS.Packet.RESOURCE_PRF:
@@ -784,6 +942,7 @@ class Link:
                        for resource in self.outgoing_resources:
                            if resource_hash == resource.hash:
                                resource.validate_proof(packet.data)
                                self.__update_phy_stats(packet, query_shared=True)

        self.watchdog_lock = False
@@ -792,21 +951,12 @@ class Link:
        try:
            if not self.fernet:
                try:
                    self.fernet = Fernet(base64.urlsafe_b64encode(self.derived_key))
                    self.fernet = Fernet(self.derived_key)
                except Exception as e:
                    RNS.log("Could not "+str(self)+" instantiate Fernet while performing encryption on link. The contained exception was: "+str(e), RNS.LOG_ERROR)
                    RNS.log("Could not instantiate Fernet while performing encryption on link "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
                    raise e

            # The fernet token VERSION field is stripped here and
            # reinserted on the receiving end, since it is always
            # set to 0x80.
            #
            # Since we're also quite content with supporting time-
            # stamps until the year 8921556 AD, we'll also strip 2
            # bytes from the timestamp field and reinsert those as
            # 0x00 when received.
            ciphertext = base64.urlsafe_b64decode(self.fernet.encrypt(plaintext))[3:]
            return ciphertext
            return self.fernet.encrypt(plaintext)

        except Exception as e:
            RNS.log("Encryption on link "+str(self)+" failed. The contained exception was: "+str(e), RNS.LOG_ERROR)
@@ -816,15 +966,13 @@ class Link:
    def decrypt(self, ciphertext):
        try:
            if not self.fernet:
                self.fernet = Fernet(base64.urlsafe_b64encode(self.derived_key))
                self.fernet = Fernet(self.derived_key)

            plaintext = self.fernet.decrypt(base64.urlsafe_b64encode(bytes([RNS.Identity.FERNET_VERSION, 0x00, 0x00])+ciphertext))
            return plaintext
            return self.fernet.decrypt(ciphertext)

        except Exception as e:
            RNS.log("Decryption failed on link "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
            # RNS.log(traceback.format_exc(), RNS.LOG_ERROR)
            # TODO: Think long about implications here
            # self.teardown()
            return None


    def sign(self, message):
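The removed encrypt/decrypt path above strips the constant Fernet VERSION byte (0x80) and the two high timestamp bytes before transmission, and reinserts them on receipt. A dependency-free sketch of just that byte-layout trick, using a synthetic token (real Fernet tokens put the version in byte 0 and an 8-byte big-endian timestamp in bytes 1-8; the function names here are mine):

```python
import os

VERSION = 0x80

def strip_token(token: bytes) -> bytes:
    # The version byte is constant, and the two high timestamp bytes
    # stay 0x00 until roughly the year 8921556, so all three can be
    # elided on the wire.
    assert token[0] == VERSION and token[1] == 0x00 and token[2] == 0x00
    return token[3:]

def restore_token(wire: bytes) -> bytes:
    # The receiver reinserts the stripped bytes before decrypting.
    return bytes([VERSION, 0x00, 0x00]) + wire

# A synthetic token standing in for a real Fernet token body
token = bytes([VERSION, 0x00, 0x00]) + os.urandom(32)
```

The round trip is lossless, which is why the old code could save three bytes per packet without changing the token semantics.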
@@ -920,6 +1068,13 @@ class Link:
    def register_incoming_resource(self, resource):
        self.incoming_resources.append(resource)

    def has_incoming_resource(self, resource):
        for incoming_resource in self.incoming_resources:
            if incoming_resource.hash == resource.hash:
                return True

        return False

    def cancel_outgoing_resource(self, resource):
        if resource in self.outgoing_resources:
            self.outgoing_resources.remove(resource)
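The new has_incoming_resource() above is a plain linear scan by resource hash, used later to avoid registering the same transfer twice. A self-contained sketch of the same guard (FakeResource is a hypothetical stand-in carrying only the attribute the check uses):

```python
class FakeResource:
    """Hypothetical stand-in with just the attribute the check reads."""
    def __init__(self, hash):
        self.hash = hash

def has_incoming_resource(incoming, resource):
    # Linear scan by hash, mirroring the dedup check added in the diff
    return any(r.hash == resource.hash for r in incoming)

incoming = [FakeResource(b"\xaa"), FakeResource(b"\xbb")]
```

Matching by hash rather than object identity is what lets a re-received advertisement for an in-flight resource be recognized as a duplicate.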
@@ -998,11 +1153,12 @@ class RequestReceipt():
    def request_resource_concluded(self, resource):
        if resource.status == RNS.Resource.COMPLETE:
            RNS.log("Request "+RNS.prettyhexrep(self.request_id)+" successfully sent as resource.", RNS.LOG_DEBUG)
            self.started_at = time.time()
            if self.started_at == None:
                self.started_at = time.time()
            self.status = RequestReceipt.DELIVERED
            self.__resource_response_timeout = time.time()+self.timeout
            response_timeout_thread = threading.Thread(target=self.__response_timeout_job)
            response_timeout_thread.setDaemon(True)
            response_timeout_thread.daemon = True
            response_timeout_thread.start()
        else:
            RNS.log("Sending request "+RNS.prettyhexrep(self.request_id)+" as resource failed with status: "+RNS.hexrep([resource.status]), RNS.LOG_DEBUG)
@@ -1039,24 +1195,26 @@ class RequestReceipt():


    def response_resource_progress(self, resource):
        if not self.status == RequestReceipt.FAILED:
            self.status = RequestReceipt.RECEIVING
            if self.packet_receipt != None:
                self.packet_receipt.status = RNS.PacketReceipt.DELIVERED
                self.packet_receipt.proved = True
                self.packet_receipt.concluded_at = time.time()
                if self.packet_receipt.callbacks.delivery != None:
                    self.packet_receipt.callbacks.delivery(self.packet_receipt)
        if resource != None:
            if not self.status == RequestReceipt.FAILED:
                self.status = RequestReceipt.RECEIVING
                if self.packet_receipt != None:
                    if self.packet_receipt.status != RNS.PacketReceipt.DELIVERED:
                        self.packet_receipt.status = RNS.PacketReceipt.DELIVERED
                        self.packet_receipt.proved = True
                        self.packet_receipt.concluded_at = time.time()
                        if self.packet_receipt.callbacks.delivery != None:
                            self.packet_receipt.callbacks.delivery(self.packet_receipt)

                self.progress = resource.get_progress()

                if self.callbacks.progress != None:
                    try:
                        self.callbacks.progress(self)
                    except Exception as e:
                        RNS.log("Error while executing response progress callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
            else:
                resource.cancel()

            self.progress = resource.get_progress()

            if self.callbacks.progress != None:
                try:
                    self.callbacks.progress(self)
                except Exception as e:
                    RNS.log("Error while executing response progress callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
        else:
            resource.cancel()


    def response_received(self, response):
RNS/Packet.py

@@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@@ -29,15 +29,19 @@ import RNS
class Packet:
    """
    The Packet class is used to create packet instances that can be sent
    over a Reticulum network. Packets to will automatically be encrypted if
    they are adressed to a ``RNS.Destination.SINGLE`` destination,
    over a Reticulum network. Packets will automatically be encrypted if
    they are addressed to a ``RNS.Destination.SINGLE`` destination,
    ``RNS.Destination.GROUP`` destination or a :ref:`RNS.Link<api-link>`.

    For ``RNS.Destination.GROUP`` destinations, Reticulum will use the
    pre-shared key configured for the destination.
    pre-shared key configured for the destination. All packets to group
    destinations are encrypted with the same AES-128 key.

    For ``RNS.Destination.SINGLE`` destinations and :ref:`RNS.Link<api-link>`
    destinations, reticulum will use ephemeral keys, and offers **Forward Secrecy**.
    For ``RNS.Destination.SINGLE`` destinations, Reticulum will use a newly
    derived ephemeral AES-128 key for every packet.

    For :ref:`RNS.Link<api-link>` destinations, Reticulum will use per-link
    ephemeral keys, and offers **Forward Secrecy**.

    :param destination: A :ref:`RNS.Destination<api-destination>` instance to which the packet will be sent.
    :param data: The data payload to be included in the packet as *bytes*.
@@ -54,9 +58,7 @@ class Packet:
    # Header types
    HEADER_1 = 0x00        # Normal header format
    HEADER_2 = 0x01        # Header format used for packets in transport
    HEADER_3 = 0x02        # Reserved
    HEADER_4 = 0x03        # Reserved
    header_types = [HEADER_1, HEADER_2, HEADER_3, HEADER_4]
    header_types = [HEADER_1, HEADER_2]

    # Packet context types
    NONE = 0x00            # Generic data packet
@@ -73,6 +75,7 @@ class Packet:
    PATH_RESPONSE = 0x0B   # Packet is a response to a path request
    COMMAND = 0x0C         # Packet is a command
    COMMAND_STATUS = 0x0D  # Packet is a status of an executed command
    CHANNEL = 0x0E         # Packet contains link channel data
    KEEPALIVE = 0xFA       # Packet is a keepalive packet
    LINKIDENTIFY = 0xFB    # Packet is a link peer identification proof
    LINKCLOSE = 0xFC       # Packet is a link close message
@@ -135,10 +138,11 @@ class Packet:
        self.receiving_interface = None
        self.rssi = None
        self.snr = None
        self.q = None

    def get_packed_flags(self):
        if self.context == Packet.LRPROOF:
            packed_flags = (self.header_type << 6) | (self.transport_type << 4) | RNS.Destination.LINK | self.packet_type
            packed_flags = (self.header_type << 6) | (self.transport_type << 4) | (RNS.Destination.LINK << 2) | self.packet_type
        else:
            packed_flags = (self.header_type << 6) | (self.transport_type << 4) | (self.destination.type << 2) | self.packet_type
        return packed_flags
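The fix in get_packed_flags() above shifts the LINK destination type into bits 2-3; without the `<< 2`, those bits would land on top of the packet-type field. A sketch of the flag byte layout the code implies (the example field values are mine, chosen only to exercise every field):

```python
def pack_flags(header_type, transport_type, destination_type, packet_type):
    # bit 6: header type, bits 4-5: transport type,
    # bits 2-3: destination type, bits 0-1: packet type
    return (header_type << 6) | (transport_type << 4) | (destination_type << 2) | packet_type

def unpack_flags(flags):
    # Mirrors the masks in Packet.unpack() after the diff's
    # change of the header mask from 0b11000000 to 0b01000000
    return ((flags & 0b01000000) >> 6,
            (flags & 0b00110000) >> 4,
            (flags & 0b00001100) >> 2,
            (flags & 0b00000011))

flags = pack_flags(1, 0, 3, 2)
```

Note that the narrower header mask only round-trips header types 0 and 1, which matches the diff's removal of the reserved HEADER_3 and HEADER_4 values.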
@@ -169,8 +173,8 @@ class Packet:
            # Packet proofs over links are not encrypted
            self.ciphertext = self.data
        elif self.context == Packet.RESOURCE:
            # A resource takes care of symmetric
            # encryption by itself
            # A resource takes care of encryption
            # by itself
            self.ciphertext = self.data
        elif self.context == Packet.KEEPALIVE:
            # Keepalive packets contain no actual
@@ -211,21 +215,23 @@ class Packet:
        self.flags = self.raw[0]
        self.hops = self.raw[1]

        self.header_type = (self.flags & 0b11000000) >> 6
        self.header_type = (self.flags & 0b01000000) >> 6
        self.transport_type = (self.flags & 0b00110000) >> 4
        self.destination_type = (self.flags & 0b00001100) >> 2
        self.packet_type = (self.flags & 0b00000011)

        DST_LEN = RNS.Reticulum.TRUNCATED_HASHLENGTH//8

        if self.header_type == Packet.HEADER_2:
            self.transport_id = self.raw[2:12]
            self.destination_hash = self.raw[12:22]
            self.context = ord(self.raw[22:23])
            self.data = self.raw[23:]
            self.transport_id = self.raw[2:DST_LEN+2]
            self.destination_hash = self.raw[DST_LEN+2:2*DST_LEN+2]
            self.context = ord(self.raw[2*DST_LEN+2:2*DST_LEN+3])
            self.data = self.raw[2*DST_LEN+3:]
        else:
            self.transport_id = None
            self.destination_hash = self.raw[2:12]
            self.context = ord(self.raw[12:13])
            self.data = self.raw[13:]
            self.destination_hash = self.raw[2:DST_LEN+2]
            self.context = ord(self.raw[DST_LEN+2:DST_LEN+3])
            self.data = self.raw[DST_LEN+3:]

        self.packed = False
        self.update_hash()
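The rewritten unpack() above replaces hardcoded 10-byte (80-bit) address offsets with arithmetic over DST_LEN, so the slicing follows the configured truncated hash length. A stand-alone sketch of that offset arithmetic (the helper name and synthetic packet bytes are mine; dst_len=16 assumes a 128-bit truncated hash):

```python
def unpack_addresses(raw: bytes, header_2: bool, dst_len: int):
    """Slice transport id, destination hash, context byte and payload
    out of a raw packet, mirroring the DST_LEN arithmetic in the diff."""
    if header_2:
        transport_id     = raw[2:dst_len+2]
        destination_hash = raw[dst_len+2:2*dst_len+2]
        context          = raw[2*dst_len+2]
        data             = raw[2*dst_len+3:]
    else:
        transport_id     = None
        destination_hash = raw[2:dst_len+2]
        context          = raw[dst_len+2]
        data             = raw[dst_len+3:]
    return transport_id, destination_hash, context, data

# Synthetic HEADER_2 packet: flags, hops, transport id,
# destination hash, context byte, payload
raw = bytes([0x51, 0x00]) + b"T" * 16 + b"D" * 16 + bytes([0x0C]) + b"payload"
fields = unpack_addresses(raw, header_2=True, dst_len=16)
```

Deriving every offset from one constant is what lets the address width change in a single place.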
@@ -252,7 +258,7 @@ class Packet:

        if not self.packed:
            self.pack()

        if RNS.Transport.outbound(self):
            return self.receipt
        else:
@@ -271,6 +277,10 @@ class Packet:
        :returns: A :ref:`RNS.PacketReceipt<api-packetreceipt>` instance if *create_receipt* was set to *True* when the packet was instantiated, if not returns *None*. If the packet could not be sent *False* is returned.
        """
        if self.sent:
            # Re-pack the packet to obtain new ciphertext for
            # encrypted destinations
            self.pack()

        if RNS.Transport.outbound(self):
            return self.receipt
        else:
@@ -313,7 +323,7 @@ class Packet:
    def get_hashable_part(self):
        hashable_part = bytes([self.raw[0] & 0b00001111])
        if self.header_type == Packet.HEADER_2:
            hashable_part += self.raw[12:]
            hashable_part += self.raw[(RNS.Identity.TRUNCATED_HASHLENGTH//8)+2:]
        else:
            hashable_part += self.raw[2:]
@@ -321,7 +331,7 @@ class Packet:

class ProofDestination:
    def __init__(self, packet):
        self.hash = packet.get_hash()[:10];
        self.hash = packet.get_hash()[:RNS.Reticulum.TRUNCATED_HASHLENGTH//8];
        self.type = RNS.Destination.SINGLE

    def encrypt(self, plaintext):
@@ -361,8 +371,8 @@ class PacketReceipt:
        if packet.destination.type == RNS.Destination.LINK:
            self.timeout = packet.destination.rtt * packet.destination.traffic_timeout_factor
        else:
            self.timeout = Packet.TIMEOUT_PER_HOP * RNS.Transport.hops_to(self.destination.hash)
            self.timeout = RNS.Reticulum.get_instance().get_first_hop_timeout(self.destination.hash)
            self.timeout += Packet.TIMEOUT_PER_HOP * RNS.Transport.hops_to(self.destination.hash)

    def get_status(self):
        """
@@ -391,9 +401,15 @@ class PacketReceipt:
            self.proved = True
            self.concluded_at = time.time()
            self.proof_packet = proof_packet
            link.last_proof = self.concluded_at

            if self.callbacks.delivery != None:
                self.callbacks.delivery(self)
                try:
                    self.callbacks.delivery(self)
                except Exception as e:
                    RNS.log("An error occurred while evaluating external delivery callback for "+str(link), RNS.LOG_ERROR)
                    RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)

            return True
        else:
            return False
@@ -488,7 +504,7 @@ class PacketReceipt:

        if self.callbacks.timeout:
            thread = threading.Thread(target=self.callbacks.timeout, args=(self,))
            thread.setDaemon(True)
            thread.daemon = True
            thread.start()
RNS/Resolver.py

@@ -0,0 +1,27 @@
# MIT License
#
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

class Resolver:

    @staticmethod
    def resolve_identity(full_name):
        pass
RNS/Resource.py

@@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@@ -25,7 +25,9 @@ import os
import bz2
import math
import time
import tempfile
import threading
from threading import Lock
from .vendor import umsgpack as umsgpack
from time import sleep
@@ -42,10 +44,39 @@ class Resource:
    :param callback: An optional *callable* with the signature *callback(resource)*. Will be called when the resource transfer concludes.
    :param progress_callback: An optional *callable* with the signature *callback(resource)*. Will be called whenever the resource transfer progress is updated.
    """
    WINDOW_FLEXIBILITY = 4
    WINDOW_MIN = 1
    WINDOW_MAX = 10

    # The initial window size at beginning of transfer
    WINDOW = 4

    # Absolute minimum window size during transfer
    WINDOW_MIN = 2

    # The maximum window size for transfers on slow links
    WINDOW_MAX_SLOW = 10

    # The maximum window size for transfers on fast links
    WINDOW_MAX_FAST = 75

    # For calculating maps and guard segments, this
    # must be set to the global maximum window.
    WINDOW_MAX = WINDOW_MAX_FAST

    # If the fast rate is sustained for this many request
    # rounds, the fast link window size will be allowed.
    FAST_RATE_THRESHOLD = WINDOW_MAX_SLOW - WINDOW - 2

    # If the RTT rate is higher than this value,
    # the max window size for fast links will be used.
    # The default is 50 Kbps (the value is stored in
    # bytes per second, hence the "/ 8").
    RATE_FAST = (50*1000) / 8

    # The minimum allowed flexibility of the window size.
    # The difference between window_max and window_min
    # will never be smaller than this value.
    WINDOW_FLEXIBILITY = 4

    # Number of bytes in a map hash
    MAPHASH_LEN = 4
    SDU = RNS.Packet.MDU
    RANDOM_HASH_SIZE = 4
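Plugging in the constants introduced above gives the concrete values the windowing logic works with: the fast-rate threshold comes out to 4 request rounds, and the fast-link rate cutoff to 6250 bytes per second. A quick check of the derived values (the numbers are taken directly from the diff):

```python
WINDOW          = 4    # initial window size
WINDOW_MAX_SLOW = 10   # window cap on slow links
WINDOW_MAX_FAST = 75   # window cap on fast links

# Rounds of sustained fast throughput required before the
# window is allowed to grow past the slow-link maximum
FAST_RATE_THRESHOLD = WINDOW_MAX_SLOW - WINDOW - 2

# 50 Kbps cutoff, stored as bytes per second
RATE_FAST = (50 * 1000) / 8
```

So a transfer has to sustain more than ~6.25 KB/s for four consecutive request rounds before the 75-part fast window is unlocked.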
@@ -74,9 +105,12 @@ class Resource:

    PART_TIMEOUT_FACTOR = 4
    PART_TIMEOUT_FACTOR_AFTER_RTT = 2
    MAX_RETRIES = 5
    SENDER_GRACE_TIME = 10
    MAX_RETRIES = 16
    MAX_ADV_RETRIES = 4
    SENDER_GRACE_TIME = 10.0
    PROCESSING_GRACE = 1.0
    RETRY_GRACE_TIME = 0.25
    PER_RETRY_DELAY = 0.5

    WATCHDOG_MAX_SLEEP = 1
@@ -118,9 +152,9 @@ class Resource:
            resource.total_parts = int(math.ceil(resource.size/float(Resource.SDU)))
            resource.received_count = 0
            resource.outstanding_parts = 0
            resource.parts = [None] * resource.total_parts
            resource.parts = [None] * resource.total_parts
            resource.window = Resource.WINDOW
            resource.window_max = Resource.WINDOW_MAX
            resource.window_max = Resource.WINDOW_MAX_SLOW
            resource.window_min = Resource.WINDOW_MIN
            resource.window_flexibility = Resource.WINDOW_FLEXIBILITY
            resource.last_activity = time.time()
@@ -136,25 +170,29 @@ class Resource:
            resource.hashmap = [None] * resource.total_parts
            resource.hashmap_height = 0
            resource.waiting_for_hmu = False

            resource.receiving_part = False

            resource.consecutive_completed_height = 0
            resource.consecutive_completed_height = -1

            resource.link.register_incoming_resource(resource)
            if not resource.link.has_incoming_resource(resource):
                resource.link.register_incoming_resource(resource)

            RNS.log("Accepting resource advertisement for "+RNS.prettyhexrep(resource.hash), RNS.LOG_DEBUG)
            if resource.link.callbacks.resource_started != None:
                try:
                    resource.link.callbacks.resource_started(resource)
                except Exception as e:
                    RNS.log("Error while executing resource started callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
                RNS.log(f"Accepting resource advertisement for {RNS.prettyhexrep(resource.hash)}. Transfer size is {RNS.prettysize(resource.size)} in {resource.total_parts} parts.", RNS.LOG_DEBUG)
                if resource.link.callbacks.resource_started != None:
                    try:
                        resource.link.callbacks.resource_started(resource)
                    except Exception as e:
                        RNS.log("Error while executing resource started callback from "+str(resource)+". The contained exception was: "+str(e), RNS.LOG_ERROR)

            resource.hashmap_update(0, resource.hashmap_raw)
                resource.hashmap_update(0, resource.hashmap_raw)

            resource.watchdog_job()
                resource.watchdog_job()

                return resource

            else:
                RNS.log("Ignoring resource advertisement for "+RNS.prettyhexrep(resource.hash)+", resource already transferring", RNS.LOG_DEBUG)
                return None

            return resource
        except Exception as e:
            RNS.log("Could not decode resource advertisement, dropping resource", RNS.LOG_DEBUG)
            return None
@@ -167,9 +205,19 @@ class Resource:
        resource_data = None
        self.assembly_lock = False

        if data != None:
            if not hasattr(data, "read") and len(data) > Resource.MAX_EFFICIENT_SIZE:
                original_data = data
                data_size = len(original_data)
                data = tempfile.TemporaryFile()
                data.write(original_data)
                del original_data

            if hasattr(data, "read"):
                data_size = os.stat(data.name).st_size
                self.total_size = data_size
                if data_size == None:
                    data_size = os.stat(data.name).st_size

                self.total_size = data_size
                self.grand_total_parts = math.ceil(data_size/Resource.SDU)

                if data_size <= Resource.MAX_EFFICIENT_SIZE:
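The new __init__ path above spills an oversized in-memory buffer into an anonymous temporary file, recording the length before the buffer is released. A minimal sketch of that pattern (the 16-byte threshold is a tiny stand-in for Resource.MAX_EFFICIENT_SIZE, and the seek(0) is added here so the example can be read back immediately):

```python
import tempfile

MAX_EFFICIENT_SIZE = 16  # stand-in threshold for the sketch

data = b"x" * 64
data_size = None
if not hasattr(data, "read") and len(data) > MAX_EFFICIENT_SIZE:
    data_size = len(data)             # remember the size first...
    spooled = tempfile.TemporaryFile()
    spooled.write(data)               # ...then spill the bytes to disk
    spooled.seek(0)
    data = spooled                    # downstream code now sees a file object
```

Capturing the size before the swap is the point: once the bytes object is replaced by the file handle, len() no longer applies and the size would otherwise have to come from a stat call.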
@@ -210,6 +258,7 @@ class Resource:
        self.status = Resource.NONE
        self.link = link
        self.max_retries = Resource.MAX_RETRIES
        self.max_adv_retries = Resource.MAX_ADV_RETRIES
        self.retries_left = self.max_retries
        self.timeout_factor = self.link.traffic_timeout_factor
        self.part_timeout_factor = Resource.PART_TIMEOUT_FACTOR
@@ -219,6 +268,11 @@ class Resource:
        self.__watchdog_job_id = 0
        self.__progress_callback = progress_callback
        self.rtt = None
        self.rtt_rxd_bytes = 0
        self.req_sent = 0
        self.req_resp_rtt_rate = 0
        self.rtt_rxd_bytes_at_part_req = 0
        self.fast_rate_rounds = 0
        self.request_id = request_id
        self.is_response = is_response
@@ -236,7 +290,7 @@ class Resource:
            self.uncompressed_data = data

            compression_began = time.time()
            if (auto_compress and len(self.uncompressed_data) < Resource.AUTO_COMPRESS_MAX_SIZE):
            if (auto_compress and len(self.uncompressed_data) <= Resource.AUTO_COMPRESS_MAX_SIZE):
                RNS.log("Compressing resource data...", RNS.LOG_DEBUG)
                self.compressed_data = bz2.compress(self.uncompressed_data)
                RNS.log("Compression completed in "+str(round(time.time()-compression_began, 3))+" seconds", RNS.LOG_DEBUG)
@@ -323,7 +377,8 @@ class Resource:
            if advertise:
                self.advertise()
        else:
            pass
            self.receive_lock = Lock()


    def hashmap_update_packet(self, plaintext):
        if not self.status == Resource.FAILED:
@@ -356,12 +411,11 @@ class Resource:
        the resource advertisement it will begin transferring.
        """
        thread = threading.Thread(target=self.__advertise_job)
        thread.setDaemon(True)
        thread.daemon = True
        thread.start()

    def __advertise_job(self):
        data = ResourceAdvertisement(self).pack()
        self.advertisement_packet = RNS.Packet(self.link, data, context=RNS.Packet.RESOURCE_ADV)
        self.advertisement_packet = RNS.Packet(self.link, ResourceAdvertisement(self).pack(), context=RNS.Packet.RESOURCE_ADV)
        while not self.link.ready_for_new_resource():
            self.status = Resource.QUEUED
            sleep(0.25)
@@ -372,6 +426,7 @@ class Resource:
            self.adv_sent = self.last_activity
            self.rtt = None
            self.status = Resource.ADVERTISED
            self.retries_left = self.max_adv_retries
            self.link.register_outgoing_resource(self)
            RNS.log("Sent resource advertisement for "+RNS.prettyhexrep(self.hash), RNS.LOG_DEBUG)
        except Exception as e:
@@ -383,7 +438,7 @@ class Resource:

    def watchdog_job(self):
        thread = threading.Thread(target=self.__watchdog_job)
        thread.setDaemon(True)
        thread.daemon = True
        thread.start()

    def __watchdog_job(self):
@@ -397,7 +452,7 @@ class Resource:
            sleep_time = None

            if self.status == Resource.ADVERTISED:
                sleep_time = (self.adv_sent+self.timeout)-time.time()
                sleep_time = (self.adv_sent+self.timeout+Resource.PROCESSING_GRACE)-time.time()
                if sleep_time < 0:
                    if self.retries_left <= 0:
                        RNS.log("Resource transfer timeout after sending advertisement", RNS.LOG_DEBUG)
@@ -407,12 +462,13 @@ class Resource:
                        try:
                            RNS.log("No part requests received, retrying resource advertisement...", RNS.LOG_DEBUG)
                            self.retries_left -= 1
                            self.advertisement_packet.resend()
                            self.advertisement_packet = RNS.Packet(self.link, ResourceAdvertisement(self).pack(), context=RNS.Packet.RESOURCE_ADV)
                            self.advertisement_packet.send()
                            self.last_activity = time.time()
                            self.adv_sent = self.last_activity
                            sleep_time = 0.001
                        except Exception as e:
                            RNS.log("Could not resend advertisement packet, cancelling resource", RNS.LOG_VERBOSE)
                            RNS.log("Could not resend advertisement packet, cancelling resource. The contained exception was: "+str(e), RNS.LOG_VERBOSE)
                            self.cancel()
@@ -426,11 +482,14 @@ class Resource:

                window_remaining = self.outstanding_parts

                sleep_time = self.last_activity + (rtt*(self.part_timeout_factor+window_remaining)) + Resource.RETRY_GRACE_TIME - time.time()
                retries_used = self.max_retries - self.retries_left
                extra_wait = retries_used * Resource.PER_RETRY_DELAY
                sleep_time = self.last_activity + (rtt*(self.part_timeout_factor+window_remaining)) + Resource.RETRY_GRACE_TIME + extra_wait - time.time()

                if sleep_time < 0:
                    if self.retries_left > 0:
                        RNS.log("Timed out waiting for parts, requesting retry", RNS.LOG_DEBUG)
                        ms = "" if self.outstanding_parts == 1 else "s"
                        RNS.log("Timed out waiting for "+str(self.outstanding_parts)+" part"+ms+", requesting retry", RNS.LOG_DEBUG)
                        if self.window > self.window_min:
                            self.window -= 1
                            if self.window_max > self.window_min:
@@ -446,7 +505,8 @@ class Resource:
                        self.cancel()
                        sleep_time = 0.001
            else:
                max_wait = self.rtt * self.timeout_factor * self.max_retries + self.sender_grace_time
                max_extra_wait = sum([(r+1) * Resource.PER_RETRY_DELAY for r in range(self.MAX_RETRIES)])
                max_wait = self.rtt * self.timeout_factor * self.max_retries + self.sender_grace_time + max_extra_wait
                sleep_time = self.last_activity + max_wait - time.time()
                if sleep_time < 0:
                    RNS.log("Resource timed out waiting for part requests", RNS.LOG_DEBUG)
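The two hunks above add a linear backoff: every retry already consumed lengthens the receiver's next timeout by PER_RETRY_DELAY, and the sender's worst-case wait grows by the sum of those per-round delays. A sketch of both computations with the constants from the diff (the helper name is mine):

```python
PER_RETRY_DELAY = 0.5
MAX_RETRIES = 16

def extra_wait(max_retries, retries_left):
    # Each retry already used lengthens the next timeout linearly
    return (max_retries - retries_left) * PER_RETRY_DELAY

# Worst-case extra time the sender must allow on top of the
# RTT-based timeout, matching the sum comprehension in the diff:
max_extra_wait = sum((r + 1) * PER_RETRY_DELAY for r in range(MAX_RETRIES))
```

With 16 retries at 0.5 s each, the cumulative allowance works out to 68 seconds, so a sender will not give up on part requests before the receiver has exhausted its own backed-off retries.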
@@ -564,10 +624,26 @@ class Resource:
                    self.callback(self)
                except Exception as e:
                    RNS.log("Error while executing resource concluded callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
                finally:
                    try:
                        if hasattr(self, "input_file"):
                            if hasattr(self.input_file, "close") and callable(self.input_file.close):
                                self.input_file.close()

                    except Exception as e:
                        RNS.log("Error while closing resource input file: "+str(e), RNS.LOG_ERROR)
            else:
                # Otherwise we'll recursively create the
                # next segment of the resource
                Resource(self.input_file, self.link, callback = self.callback, segment_index = self.segment_index+1, original_hash=self.original_hash, progress_callback = self.__progress_callback)
                Resource(
                    self.input_file, self.link,
                    callback = self.callback,
                    segment_index = self.segment_index+1,
                    original_hash=self.original_hash,
                    progress_callback = self.__progress_callback,
                    request_id = self.request_id,
                    is_response = self.is_response,
                )
            else:
                pass
        else:
@@ -575,73 +651,99 @@ class Resource:

    def receive_part(self, packet):
        with self.receive_lock:
            self.receiving_part = True
            self.last_activity = time.time()
            self.retries_left = self.max_retries

            if self.req_resp == None:
                self.req_resp = self.last_activity
                rtt = self.req_resp-self.req_sent
                self.part_timeout_factor = Resource.PART_TIMEOUT_FACTOR_AFTER_RTT

                if self.rtt == None:
                    self.rtt = self.link.rtt
                    self.watchdog_job()
                elif rtt < self.rtt:
                    self.rtt = max(self.rtt - self.rtt*0.05, rtt)
                elif rtt > self.rtt:
                    self.rtt = min(self.rtt + self.rtt*0.05, rtt)

                if rtt > 0:
                    req_resp_cost = len(packet.raw)+self.req_sent_bytes
                    self.req_resp_rtt_rate = req_resp_cost / rtt

                    if self.req_resp_rtt_rate > Resource.RATE_FAST and self.fast_rate_rounds < Resource.FAST_RATE_THRESHOLD:
                        self.fast_rate_rounds += 1

                        if self.fast_rate_rounds == Resource.FAST_RATE_THRESHOLD:
                            self.window_max = Resource.WINDOW_MAX_FAST

            if not self.status == Resource.FAILED:
                self.status = Resource.TRANSFERRING
                part_data = packet.data
                part_hash = self.get_map_hash(part_data)

                if self.__progress_callback != None:
                    try:
                        self.__progress_callback(self)
                    except Exception as e:
                        RNS.log("Error while executing progress callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)

                consecutive_index = self.consecutive_completed_height if self.consecutive_completed_height >= 0 else 0
                i = consecutive_index
                for map_hash in self.hashmap[consecutive_index:consecutive_index+self.window]:
                    if map_hash == part_hash:
                        if self.parts[i] == None:
                            # Insert data into parts list
                            self.parts[i] = part_data
                            self.rtt_rxd_bytes += len(part_data)
                            self.received_count += 1
                            self.outstanding_parts -= 1

                            # Update consecutive completed pointer
                            if i == self.consecutive_completed_height + 1:
                                self.consecutive_completed_height = i

                            cp = self.consecutive_completed_height + 1
                            while cp < len(self.parts) and self.parts[cp] != None:
                                self.consecutive_completed_height = cp
                                cp += 1

                    i += 1

                self.receiving_part = False

                if self.received_count == self.total_parts and not self.assembly_lock:
                    self.assembly_lock = True
                    self.assemble()
                elif self.outstanding_parts == 0:
                    # TODO: Figure out if there is a mathematically
                    # optimal way to adjust windows
                    if self.window < self.window_max:
                        self.window += 1
                        if (self.window - self.window_min) > (self.window_flexibility-1):
                            self.window_min += 1

                    if self.req_sent != 0:
                        rtt = time.time()-self.req_sent
                        req_transferred = self.rtt_rxd_bytes - self.rtt_rxd_bytes_at_part_req

                        if rtt != 0:
                            self.req_data_rtt_rate = req_transferred/rtt
                            self.rtt_rxd_bytes_at_part_req = self.rtt_rxd_bytes

                            if self.req_data_rtt_rate > Resource.RATE_FAST and self.fast_rate_rounds < Resource.FAST_RATE_THRESHOLD:
                                self.fast_rate_rounds += 1

                                if self.fast_rate_rounds == Resource.FAST_RATE_THRESHOLD:
                                    self.window_max = Resource.WINDOW_MAX_FAST

                    self.request_next()
            else:
                self.receiving_part = False

    # Called on incoming resource to send a request for more data
    def request_next(self):
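The consecutive-completed pointer maintained in receive_part can be modeled in isolation. This is an illustrative sketch, not RNS code; the helper name and list-of-parts representation are hypothetical:

```python
def advance_consecutive(parts, height):
    # Advance past any contiguous run of received parts that
    # begins just above the current consecutive-completed height.
    cp = height + 1
    while cp < len(parts) and parts[cp] is not None:
        height = cp
        cp += 1
    return height
```

With parts `[b"a", b"b", None, b"d"]` and a starting height of -1, the pointer advances to index 1 and stops at the gap, which is exactly what lets the receiver request only parts above the contiguous prefix.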
@@ -654,11 +756,11 @@ class Resource:
            hashmap_exhausted = Resource.HASHMAP_IS_NOT_EXHAUSTED
            requested_hashes = b""

            i = 0; pn = self.consecutive_completed_height+1
            search_start = pn
            search_size = self.window

            for part in self.parts[search_start:search_start+search_size]:
                if part == None:
                    part_hash = self.hashmap[pn]
                    if part_hash != None:
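The request window above always starts one slot above the consecutive-completed height and scans at most `window` slots. A minimal standalone sketch of that selection (hypothetical helper, not part of RNS):

```python
def missing_in_window(parts, completed_height, window):
    # Collect indices of missing parts, starting just above the
    # consecutive-completed height, bounded by the current window.
    start = completed_height + 1
    return [i for i, p in enumerate(parts[start:start+window], start) if p is None]
```

For `[b"a", None, b"c", None]` with completed height 0 and window 3, the missing indices are 1 and 3, mirroring which map hashes would be placed into the request.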
@@ -678,7 +780,6 @@ class Resource:
                hmu_part += last_map_hash
                self.waiting_for_hmu = True

            request_data = hmu_part + self.hash + requested_hashes
            request_packet = RNS.Packet(self.link, request_data, context = RNS.Packet.RESOURCE_REQ)
@@ -686,7 +787,9 @@ class Resource:
                request_packet.send()
                self.last_activity = time.time()
                self.req_sent = self.last_activity
                self.req_sent_bytes = len(request_packet.raw)
                self.req_resp = None

            except Exception as e:
                RNS.log("Could not send resource request packet, cancelling resource", RNS.LOG_DEBUG)
                RNS.log("The contained exception was: "+str(e), RNS.LOG_DEBUG)
@@ -710,27 +813,34 @@ class Resource:
            requested_hashes = request_data[pad+RNS.Identity.HASHLENGTH//8:]

            # Define the search scope
            search_start = self.receiver_min_consecutive_height
            search_end = self.receiver_min_consecutive_height+ResourceAdvertisement.COLLISION_GUARD_SIZE

            map_hashes = []
            for i in range(0,len(requested_hashes)//Resource.MAPHASH_LEN):
                map_hash = requested_hashes[i*Resource.MAPHASH_LEN:(i+1)*Resource.MAPHASH_LEN]
                map_hashes.append(map_hash)

            search_scope = self.parts[search_start:search_end]
            requested_parts = list(filter(lambda part: part.map_hash in map_hashes, search_scope))

            for part in requested_parts:
                try:
                    if not part.sent:
                        part.send()
                        self.sent_parts += 1
                    else:
                        part.resend()

                    self.last_activity = time.time()
                    self.last_part_sent = self.last_activity

                except Exception as e:
                    RNS.log("Resource could not send parts, cancelling transfer!", RNS.LOG_DEBUG)
                    RNS.log("The contained exception was: "+str(e), RNS.LOG_DEBUG)
                    self.cancel()

        if wants_more_hashmap:
            last_map_hash = request_data[1:Resource.MAPHASH_LEN+1]
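The sender-side lookup above splits the concatenated hash field into fixed-size map hashes and then selects the matching parts in one filtered pass. A simplified model of that selection (names, the 4-byte hash length, and the dict representation of a part are assumptions for the sketch only):

```python
MAPHASH_LEN = 4  # assumed for the sketch; RNS defines its own constant

def parts_for_request(requested_hashes, parts):
    # Split the concatenated request field into fixed-size map
    # hashes, then keep every part whose map hash was requested.
    map_hashes = [requested_hashes[i*MAPHASH_LEN:(i+1)*MAPHASH_LEN]
                  for i in range(len(requested_hashes)//MAPHASH_LEN)]
    return list(filter(lambda part: part["map_hash"] in map_hashes, parts))
```

Note that the result preserves part order, not request order, so parts still go out in sequence even if the receiver listed hashes out of order.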
@@ -825,8 +935,7 @@ class Resource:
        else:
            self.progress_total_parts = float(self.total_parts)

        progress = min(1.0, self.processed_parts / self.progress_total_parts)
        return progress

    def get_transfer_size(self):
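The added `min(1.0, ...)` clamp guards the case where retransmitted or duplicate parts push the processed count past the total. A minimal sketch of the rule (standalone function, not the RNS method itself):

```python
def get_progress(processed_parts, total_parts):
    # Clamp so that duplicates or retransmissions can never make
    # the reported progress exceed 100 %.
    return min(1.0, processed_parts / float(total_parts))
```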
@@ -870,7 +979,7 @@ class Resource:


class ResourceAdvertisement:
    OVERHEAD = 134
    HASHMAP_MAX_LEN = math.floor((RNS.Link.MDU-OVERHEAD)/Resource.MAPHASH_LEN)
    COLLISION_GUARD_SIZE = 2*Resource.WINDOW_MAX+HASHMAP_MAX_LEN
@@ -914,6 +1023,7 @@ class ResourceAdvertisement:

    def __init__(self, resource=None, request_id=None, is_response=False):
        self.link = None
        if resource != None:
            self.t = resource.size # Transfer size
            self.d = resource.total_size # Total uncompressed data size
@@ -960,6 +1070,9 @@ class ResourceAdvertisement:
    def is_compressed(self):
        return self.c

    def get_link(self):
        return self.link

    def pack(self, segment=0):
        hashmap_start = segment*ResourceAdvertisement.HASHMAP_MAX_LEN
        hashmap_end = min((segment+1)*(ResourceAdvertisement.HASHMAP_MAX_LEN), self.n)
1212 RNS/Reticulum.py
1054 RNS/Transport.py
@@ -24,6 +24,7 @@
import RNS
import argparse
import threading
import time
import sys
import os
@@ -34,9 +35,12 @@ APP_NAME = "rncp"
allow_all = False
allowed_identity_hashes = []

def listen(configdir, verbosity = 0, quietness = 0, allowed = [], display_identity = False, limit = None, disable_auth = None, announce = False):
    global allow_all, allowed_identity_hashes
    from tempfile import TemporaryFile
    identity = None
    if announce < 0:
        announce = False

    targetloglevel = 3+verbosity-quietness
    reticulum = RNS.Reticulum(configdir=configdir, loglevel=targetloglevel)
@@ -54,16 +58,48 @@ def listen(configdir, verbosity = 0, quietness = 0, allowed = [], display_ident

    if display_identity:
        print("Identity : "+str(identity))
        print("Listening on : "+RNS.prettyhexrep(destination.hash))
        exit(0)

    if disable_auth:
        allow_all = True
    else:
        dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
        try:
            allowed_file_name = "allowed_identities"
            allowed_file = None
            if os.path.isfile(os.path.expanduser("/etc/rncp/"+allowed_file_name)):
                allowed_file = os.path.expanduser("/etc/rncp/"+allowed_file_name)
            elif os.path.isfile(os.path.expanduser("~/.config/rncp/"+allowed_file_name)):
                allowed_file = os.path.expanduser("~/.config/rncp/"+allowed_file_name)
            elif os.path.isfile(os.path.expanduser("~/.rncp/"+allowed_file_name)):
                allowed_file = os.path.expanduser("~/.rncp/"+allowed_file_name)

            if allowed_file != None:
                af = open(allowed_file, "r")
                al = af.read().replace("\r", "").split("\n")
                ali = []
                for a in al:
                    if len(a) == dest_len:
                        ali.append(a)

                if len(ali) > 0:
                    if not allowed:
                        allowed = ali
                    else:
                        allowed.extend(ali)

                    if len(ali) == 1:
                        ms = "y"
                    else:
                        ms = "ies"

                    RNS.log("Loaded "+str(len(ali))+" allowed identit"+ms+" from "+str(allowed_file), RNS.LOG_VERBOSE)

        except Exception as e:
            RNS.log("Error while parsing allowed_identities file. The contained exception was: "+str(e), RNS.LOG_ERROR)

        if allowed != None:
            for a in allowed:
                try:
                    dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
                    if len(a) != dest_len:
                        raise ValueError("Allowed destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2))
                    try:
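The allowed_identities parsing above keeps only lines whose length equals one truncated destination hash in hex. That filter can be exercised on its own (hypothetical helper for illustration):

```python
def parse_allowed_identities(text, dest_len):
    # Strip carriage returns, split into lines, and keep only
    # lines that are exactly one truncated hash long; blanks and
    # anything else are silently ignored.
    lines = text.replace("\r", "").split("\n")
    return [line for line in lines if len(line) == dest_len]
```

Malformed lines are dropped rather than rejected, so a stray comment in the file does not prevent the valid hashes around it from loading.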
@@ -78,16 +114,60 @@ def listen(configdir, verbosity = 0, quietness = 0, allowed = [], display_ident
    if len(allowed_identity_hashes) < 1 and not disable_auth:
        print("Warning: No allowed identities configured, rncp will not accept any files!")

    def fetch_request(path, data, request_id, link_id, remote_identity, requested_at):
        target_link = None
        for link in RNS.Transport.active_links:
            if link.link_id == link_id:
                target_link = link

        file_path = os.path.expanduser(data)
        if not os.path.isfile(file_path):
            RNS.log("Client-requested file not found: "+str(file_path), RNS.LOG_VERBOSE)
            return False
        else:
            if target_link != None:
                RNS.log("Sending file "+str(file_path)+" to client", RNS.LOG_VERBOSE)

                temp_file = TemporaryFile()
                real_file = open(file_path, "rb")
                filename_bytes = os.path.basename(file_path).encode("utf-8")
                filename_len = len(filename_bytes)

                if filename_len > 0xFFFF:
                    print("Filename exceeds max size, cannot send")
                    exit(1)
                else:
                    print("Preparing file...", end=" ")

                temp_file.write(filename_len.to_bytes(2, "big"))
                temp_file.write(filename_bytes)
                temp_file.write(real_file.read())
                temp_file.seek(0)

                fetch_resource = RNS.Resource(temp_file, target_link)
                return True
            else:
                return None

    destination.set_link_established_callback(client_link_established)
    destination.register_request_handler("fetch_file", response_generator=fetch_request, allow=RNS.Destination.ALLOW_LIST, allowed_list=allowed_identity_hashes)
    print("rncp listening on "+RNS.prettyhexrep(destination.hash))

    if announce >= 0:
        def job():
            destination.announce()
            if announce > 0:
                while True:
                    time.sleep(announce)
                    destination.announce()

        threading.Thread(target=job, daemon=True).start()

    while True:
        time.sleep(1)

def client_link_established(link):
    RNS.log("Incoming link established", RNS.LOG_VERBOSE)
    link.set_remote_identified_callback(receive_sender_identified)
    link.set_resource_strategy(RNS.Link.ACCEPT_APP)
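fetch_request frames the file as a 2-byte big-endian filename length, the UTF-8 filename, and then the raw contents; the fetch side reverses this. The round trip can be sketched without any RNS machinery (helper names are hypothetical):

```python
import io

def frame_file(filename, payload):
    # 2-byte big-endian filename length, then the UTF-8 filename,
    # then the raw file contents.
    name = filename.encode("utf-8")
    buf = io.BytesIO()
    buf.write(len(name).to_bytes(2, "big"))
    buf.write(name)
    buf.write(payload)
    buf.seek(0)
    return buf

def unframe_file(buf):
    # Reverse of frame_file, as done in fetch_resource_concluded.
    name_len = int.from_bytes(buf.read(2), "big")
    name = buf.read(name_len).decode("utf-8")
    return name, buf.read()
```

The 2-byte prefix is also why the utility refuses filenames longer than 0xFFFF bytes before sending.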
@@ -96,21 +176,29 @@ def receive_link_established(link):
    link.set_resource_concluded_callback(receive_resource_concluded)

def receive_sender_identified(link, identity):
    global allow_all

    if identity.hash in allowed_identity_hashes:
        RNS.log("Authenticated sender", RNS.LOG_VERBOSE)
    else:
        if not allow_all:
            RNS.log("Sender not allowed, tearing down link", RNS.LOG_VERBOSE)
            link.teardown()
        else:
            pass

def receive_resource_callback(resource):
    global allow_all

    sender_identity = resource.link.get_remote_identity()

    if sender_identity != None:
        if sender_identity.hash in allowed_identity_hashes:
            return True

    if allow_all:
        return True

    return False

def receive_resource_started(resource):
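The acceptance rule in receive_resource_callback reduces to: a known identity is always accepted, and anyone else only when authentication is disabled. A condensed sketch of that gate (illustrative only, not the rncp function):

```python
def accept_sender(sender_hash, allowed_hashes, allow_all=False):
    # A known identity is always accepted; unknown or unidentified
    # senders are accepted only when authentication is disabled.
    if sender_hash is not None and sender_hash in allowed_hashes:
        return True
    return allow_all
```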
@@ -173,7 +261,221 @@ def sender_progress(resource):
    resource_done = True

link = None
def fetch(configdir, verbosity = 0, quietness = 0, destination = None, file = None, timeout = RNS.Transport.PATH_REQUEST_TIMEOUT, silent=False):
    global current_resource, resource_done, link, speed
    targetloglevel = 3+verbosity-quietness

    try:
        dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
        if len(destination) != dest_len:
            raise ValueError("Allowed destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2))
        try:
            destination_hash = bytes.fromhex(destination)
        except Exception as e:
            raise ValueError("Invalid destination entered. Check your input.")
    except Exception as e:
        print(str(e))
        exit(1)

    reticulum = RNS.Reticulum(configdir=configdir, loglevel=targetloglevel)

    identity_path = RNS.Reticulum.identitypath+"/"+APP_NAME
    if os.path.isfile(identity_path):
        identity = RNS.Identity.from_file(identity_path)
        if identity == None:
            RNS.log("Could not load identity for rncp. The identity file at \""+str(identity_path)+"\" may be corrupt or unreadable.", RNS.LOG_ERROR)
            exit(2)
    else:
        identity = None

    if identity == None:
        RNS.log("No valid saved identity found, creating new...", RNS.LOG_INFO)
        identity = RNS.Identity()
        identity.to_file(identity_path)

    if not RNS.Transport.has_path(destination_hash):
        RNS.Transport.request_path(destination_hash)
        if silent:
            print("Path to "+RNS.prettyhexrep(destination_hash)+" requested")
        else:
            print("Path to "+RNS.prettyhexrep(destination_hash)+" requested ", end=" ")
            sys.stdout.flush()

    i = 0
    syms = "⢄⢂⢁⡁⡈⡐⡠"
    estab_timeout = time.time()+timeout
    while not RNS.Transport.has_path(destination_hash) and time.time() < estab_timeout:
        if not silent:
            time.sleep(0.1)
            print(("\b\b"+syms[i]+" "), end="")
            sys.stdout.flush()
            i = (i+1)%len(syms)

    if not RNS.Transport.has_path(destination_hash):
        if silent:
            print("Path not found")
        else:
            print("\r \rPath not found")
        exit(1)
    else:
        if silent:
            print("Establishing link with "+RNS.prettyhexrep(destination_hash))
        else:
            print("\r \rEstablishing link with "+RNS.prettyhexrep(destination_hash)+" ", end=" ")

        listener_identity = RNS.Identity.recall(destination_hash)
        listener_destination = RNS.Destination(
            listener_identity,
            RNS.Destination.OUT,
            RNS.Destination.SINGLE,
            APP_NAME,
            "receive"
        )

        link = RNS.Link(listener_destination)
        while link.status != RNS.Link.ACTIVE and time.time() < estab_timeout:
            if not silent:
                time.sleep(0.1)
                print(("\b\b"+syms[i]+" "), end="")
                sys.stdout.flush()
                i = (i+1)%len(syms)

        if not RNS.Transport.has_path(destination_hash):
            if silent:
                print("Could not establish link with "+RNS.prettyhexrep(destination_hash))
            else:
                print("\r \rCould not establish link with "+RNS.prettyhexrep(destination_hash))
            exit(1)
        else:
            if silent:
                print("Requesting file from remote...")
            else:
                print("\r \rRequesting file from remote ", end=" ")

            link.identify(identity)

            request_resolved = False
            request_status = "unknown"
            resource_resolved = False
            resource_status = "unrequested"
            current_resource = None

            def request_response(request_receipt):
                nonlocal request_resolved, request_status
                if request_receipt.response == False:
                    request_status = "not_found"
                elif request_receipt.response == None:
                    request_status = "remote_error"
                else:
                    request_status = "found"

                request_resolved = True

            def request_failed(request_receipt):
                nonlocal request_resolved, request_status
                request_status = "unknown"
                request_resolved = True

            def fetch_resource_started(resource):
                nonlocal current_resource, resource_status
                current_resource = resource
                current_resource.progress_callback(sender_progress)
                resource_status = "started"

            def fetch_resource_concluded(resource):
                nonlocal resource_resolved, resource_status
                if resource.status == RNS.Resource.COMPLETE:
                    if resource.total_size > 4:
                        filename_len = int.from_bytes(resource.data.read(2), "big")
                        filename = resource.data.read(filename_len).decode("utf-8")

                        counter = 0
                        saved_filename = filename
                        while os.path.isfile(saved_filename):
                            counter += 1
                            saved_filename = filename+"."+str(counter)

                        file = open(saved_filename, "wb")
                        file.write(resource.data.read())
                        file.close()
                        resource_status = "completed"

                    else:
                        print("Invalid data received, ignoring resource")
                        resource_status = "invalid_data"

                else:
                    print("Resource failed")
                    resource_status = "failed"

                resource_resolved = True

            link.set_resource_strategy(RNS.Link.ACCEPT_ALL)
            link.set_resource_started_callback(fetch_resource_started)
            link.set_resource_concluded_callback(fetch_resource_concluded)
            link.request("fetch_file", data=file, response_callback=request_response, failed_callback=request_failed)

            syms = "⢄⢂⢁⡁⡈⡐⡠"
            while not request_resolved:
                if not silent:
                    time.sleep(0.1)
                    print(("\b\b"+syms[i]+" "), end="")
                    sys.stdout.flush()
                    i = (i+1)%len(syms)

            if request_status == "not_found":
                if not silent: print("\r \r", end="")
                print("Fetch request failed, the file "+str(file)+" was not found on the remote")
                link.teardown()
                time.sleep(1)
                exit(0)
            elif request_status == "remote_error":
                if not silent: print("\r \r", end="")
                print("Fetch request failed due to an error on the remote system")
                link.teardown()
                time.sleep(1)
                exit(0)
            elif request_status == "unknown":
                if not silent: print("\r \r", end="")
                print("Fetch request failed due to an unknown error (probably not authorised)")
                link.teardown()
                time.sleep(1)
                exit(0)
            elif request_status == "found":
                if not silent: print("\r \r", end="")

            while not resource_resolved:
                if not silent:
                    time.sleep(0.1)
                    if current_resource:
                        prg = current_resource.get_progress()
                        percent = round(prg * 100.0, 1)
                        stat_str = str(percent)+"% - " + size_str(int(prg*current_resource.total_size)) + " of " + size_str(current_resource.total_size) + " - " +size_str(speed, "b")+"ps"
                        print("\r \rTransferring file "+syms[i]+" "+stat_str, end=" ")
                    else:
                        print("\r \rWaiting for transfer to start "+syms[i]+" ", end=" ")
                    sys.stdout.flush()
                    i = (i+1)%len(syms)

            if current_resource.status != RNS.Resource.COMPLETE:
                if silent:
                    print("The transfer failed")
                else:
                    print("\r \rThe transfer failed")
                exit(1)
            else:
                if silent:
                    print(str(file)+" fetched from "+RNS.prettyhexrep(destination_hash))
                else:
                    print("\r \r"+str(file)+" fetched from "+RNS.prettyhexrep(destination_hash))
                link.teardown()
                time.sleep(0.25)
                exit(0)

    link.teardown()
    exit(0)


def send(configdir, verbosity = 0, quietness = 0, destination = None, file = None, timeout = RNS.Transport.PATH_REQUEST_TIMEOUT, silent=False):
    global current_resource, resource_done, link, speed
    from tempfile import TemporaryFile
    targetloglevel = 3+verbosity-quietness
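fetch_resource_concluded avoids clobbering an existing file by appending a numeric suffix until a free name is found. The rule in isolation (hypothetical helper; `exists` stands in for os.path.isfile so it can be tested without touching the filesystem):

```python
def unique_filename(filename, exists):
    # Append .1, .2, ... until the candidate name is not taken.
    counter = 0
    candidate = filename
    while exists(candidate):
        counter += 1
        candidate = filename + "." + str(counter)
    return candidate
```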
@@ -212,13 +514,18 @@ def send(configdir, verbosity = 0, quietness = 0, destination = None, file = Non
        temp_file.write(real_file.read())
        temp_file.seek(0)

        print("\r \r", end="")

    reticulum = RNS.Reticulum(configdir=configdir, loglevel=targetloglevel)

    identity_path = RNS.Reticulum.identitypath+"/"+APP_NAME
    if os.path.isfile(identity_path):
        identity = RNS.Identity.from_file(identity_path)
        if identity == None:
            RNS.log("Could not load identity for rncp. The identity file at \""+str(identity_path)+"\" may be corrupt or unreadable.", RNS.LOG_ERROR)
            exit(2)
    else:
        identity = None

    if identity == None:
        RNS.log("No valid saved identity found, creating new...", RNS.LOG_INFO)
@@ -227,23 +534,33 @@ def send(configdir, verbosity = 0, quietness = 0, destination = None, file = Non

    if not RNS.Transport.has_path(destination_hash):
        RNS.Transport.request_path(destination_hash)
        if silent:
            print("Path to "+RNS.prettyhexrep(destination_hash)+" requested")
        else:
            print("Path to "+RNS.prettyhexrep(destination_hash)+" requested ", end=" ")
            sys.stdout.flush()

    i = 0
    syms = "⢄⢂⢁⡁⡈⡐⡠"
    estab_timeout = time.time()+timeout
    while not RNS.Transport.has_path(destination_hash) and time.time() < estab_timeout:
        if not silent:
            time.sleep(0.1)
            print(("\b\b"+syms[i]+" "), end="")
            sys.stdout.flush()
            i = (i+1)%len(syms)

    if not RNS.Transport.has_path(destination_hash):
        if silent:
            print("Path not found")
        else:
            print("\r \rPath not found")
        exit(1)
    else:
        if silent:
            print("Establishing link with "+RNS.prettyhexrep(destination_hash))
        else:
            print("\r \rEstablishing link with "+RNS.prettyhexrep(destination_hash)+" ", end=" ")

        receiver_identity = RNS.Identity.recall(destination_hash)
        receiver_destination = RNS.Destination(
@@ -256,48 +573,75 @@ def send(configdir, verbosity = 0, quietness = 0, destination = None, file = Non

        link = RNS.Link(receiver_destination)
        while link.status != RNS.Link.ACTIVE and time.time() < estab_timeout:
            if not silent:
                time.sleep(0.1)
                print(("\b\b"+syms[i]+" "), end="")
                sys.stdout.flush()
                i = (i+1)%len(syms)

        if time.time() > estab_timeout:
            if silent:
                print("Link establishment with "+RNS.prettyhexrep(destination_hash)+" timed out")
            else:
                print("\r \rLink establishment with "+RNS.prettyhexrep(destination_hash)+" timed out")
            exit(1)
        elif not RNS.Transport.has_path(destination_hash):
            if silent:
                print("No path found to "+RNS.prettyhexrep(destination_hash))
            else:
                print("\r \rNo path found to "+RNS.prettyhexrep(destination_hash))
            exit(1)
        else:
            if silent:
                print("Advertising file resource...")
            else:
                print("\r \rAdvertising file resource ", end=" ")

            link.identify(identity)
            resource = RNS.Resource(temp_file, link, callback = sender_progress, progress_callback = sender_progress)
            current_resource = resource

            while resource.status < RNS.Resource.TRANSFERRING:
                if not silent:
                    time.sleep(0.1)
                    print(("\b\b"+syms[i]+" "), end="")
                    sys.stdout.flush()
                    i = (i+1)%len(syms)

            if resource.status > RNS.Resource.COMPLETE:
                if silent:
                    print("File was not accepted by "+RNS.prettyhexrep(destination_hash))
                else:
                    print("\r \rFile was not accepted by "+RNS.prettyhexrep(destination_hash))
                exit(1)
            else:
                if silent:
                    print("Transferring file...")
                else:
                    print("\r \rTransferring file ", end=" ")

                while not resource_done:
                    if not silent:
                        time.sleep(0.1)
                        prg = current_resource.get_progress()
                        percent = round(prg * 100.0, 1)
                        stat_str = str(percent)+"% - " + size_str(int(prg*current_resource.total_size)) + " of " + size_str(current_resource.total_size) + " - " +size_str(speed, "b")+"ps"
                        print("\r \rTransferring file "+syms[i]+" "+stat_str, end=" ")
                        sys.stdout.flush()
                        i = (i+1)%len(syms)

                if current_resource.status != RNS.Resource.COMPLETE:
                    if silent:
                        print("The transfer failed")
                    else:
                        print("\r \rThe transfer failed")
                    exit(1)
                else:
                    if silent:
                        print(str(file_path)+" copied to "+RNS.prettyhexrep(destination_hash))
                    else:
                        print("\r \r"+str(file_path)+" copied to "+RNS.prettyhexrep(destination_hash))
                    link.teardown()
                    time.sleep(0.25)
                    real_file.close()
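The progress line assembled in both transfer loops combines percentage, transferred bytes, total size, and rate. A simplified re-creation (size_str from the utility is reduced to raw byte counts for the sketch):

```python
def stat_line(progress, total_size, speed_bps):
    # Percentage rounded to one decimal, transferred bytes derived
    # from the progress fraction, plus the current transfer rate.
    percent = round(progress * 100.0, 1)
    transferred = int(progress * total_size)
    return str(percent)+"% - "+str(transferred)+" B of "+str(total_size)+" B - "+str(speed_bps)+" bps"
```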
@@ -312,19 +656,21 @@ def main():
        parser.add_argument("--config", metavar="path", action="store", default=None, help="path to alternative Reticulum config directory", type=str)
        parser.add_argument('-v', '--verbose', action='count', default=0, help="increase verbosity")
        parser.add_argument('-q', '--quiet', action='count', default=0, help="decrease verbosity")
        parser.add_argument("-S", '--silent', action='store_true', default=False, help="disable transfer progress output")
        parser.add_argument("-l", '--listen', action='store_true', default=False, help="listen for incoming transfer requests")
        parser.add_argument("-f", '--fetch', action='store_true', default=False, help="fetch file from remote listener instead of sending")
        parser.add_argument("-b", action='store', metavar="seconds", default=-1, help="announce interval, 0 to only announce at startup", type=int)
        parser.add_argument('-a', metavar="allowed_hash", dest="allowed", action='append', help="accept from this identity", type=str)
        parser.add_argument('-n', '--no-auth', action='store_true', default=False, help="accept files from anyone")
        parser.add_argument('-p', '--print-identity', action='store_true', default=False, help="print identity and destination info and exit")
        parser.add_argument("-w", action="store", metavar="seconds", type=float, help="sender timeout before giving up", default=RNS.Transport.PATH_REQUEST_TIMEOUT)
        # parser.add_argument("--limit", action="store", metavar="files", type=float, help="maximum number of files to accept", default=None)
        parser.add_argument("--version", action="version", version="rncp {version}".format(version=__version__))

        args = parser.parse_args()

        if args.listen or args.print_identity:
            listen(
                configdir = args.config,
                verbosity=args.verbose,
                quietness=args.quiet,
@ -332,9 +678,25 @@ def main():
|
|||
display_identity=args.print_identity,
|
||||
# limit=args.limit,
|
||||
disable_auth=args.no_auth,
|
||||
disable_announce=args.no_announce,
|
||||
announce=args.b,
|
||||
)
|
||||
|
||||
elif args.fetch:
|
||||
if args.destination != None and args.file != None:
|
||||
fetch(
|
||||
configdir = args.config,
|
||||
verbosity = args.verbose,
|
||||
quietness = args.quiet,
|
||||
destination = args.destination,
|
||||
file = args.file,
|
||||
timeout = args.w,
|
||||
silent = args.silent,
|
||||
)
|
||||
else:
|
||||
print("")
|
||||
parser.print_help()
|
||||
print("")
|
||||
|
||||
elif args.destination != None and args.file != None:
|
||||
send(
|
||||
configdir = args.config,
|
||||
|
@ -343,6 +705,7 @@ def main():
|
|||
destination = args.destination,
|
||||
file = args.file,
|
||||
timeout = args.w,
|
||||
silent = args.silent,
|
||||
)
|
||||
|
||||
else:
|
||||
|
|
|
@@ -0,0 +1,600 @@
#!/usr/bin/env python3

# MIT License
#
# Copyright (c) 2023 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import RNS
import argparse
import time
import sys
import os
import base64

from RNS._version import __version__

APP_NAME = "rnid"

SIG_EXT = "rsg"
ENCRYPT_EXT = "rfe"
CHUNK_SIZE = 16*1024*1024

def spin(until=None, msg=None, timeout=None):
    i = 0
    syms = "⢄⢂⢁⡁⡈⡐⡠"
    if timeout != None:
        timeout = time.time()+timeout

    print(msg+" ", end=" ")
    while (timeout == None or time.time()<timeout) and not until():
        time.sleep(0.1)
        print(("\b\b"+syms[i]+" "), end="")
        sys.stdout.flush()
        i = (i+1)%len(syms)

    print("\r"+" "*len(msg)+" \r", end="")

    if timeout != None and time.time() > timeout:
        return False
    else:
        return True
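The spin() helper above blocks until a condition callback becomes true or an optional deadline passes, redrawing a one-character spinner every 100 ms and returning False on timeout. A minimal self-contained sketch of the same polling pattern (the name wait_for and the ASCII spinner symbols are illustrative, not part of rnid):

```python
import sys
import time

def wait_for(condition, msg="Working", timeout=None):
    # Convert the relative timeout into an absolute deadline, as spin() does
    deadline = time.time() + timeout if timeout is not None else None
    syms = "|/-\\"
    i = 0
    print(msg + "  ", end="")
    while (deadline is None or time.time() < deadline) and not condition():
        time.sleep(0.1)
        # Erase the previous symbol and draw the next one in place
        print("\b\b" + syms[i] + " ", end="")
        sys.stdout.flush()
        i = (i + 1) % len(syms)
    # Clear the spinner line before returning
    print("\r" + " " * (len(msg) + 2) + "\r", end="")
    # False signals that the deadline expired before the condition held
    return deadline is None or time.time() <= deadline
```

rnid uses exactly this shape to wait for a requested Identity to appear in the local cache, passing a closure over the destination hash as the condition.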
|
||||
|
||||
def main():
|
||||
try:
|
||||
parser = argparse.ArgumentParser(description="Reticulum Identity & Encryption Utility")
|
||||
# parser.add_argument("file", nargs="?", default=None, help="input file path", type=str)
|
||||
|
||||
parser.add_argument("--config", metavar="path", action="store", default=None, help="path to alternative Reticulum config directory", type=str)
|
||||
parser.add_argument("-i", "--identity", metavar="identity", action="store", default=None, help="hexadecimal Reticulum Destination hash or path to Identity file", type=str)
|
||||
parser.add_argument("-g", "--generate", metavar="file", action="store", default=None, help="generate a new Identity")
|
||||
parser.add_argument("-m", "--import", dest="import_str", metavar="identity_data", action="store", default=None, help="import Reticulum identity in hex, base32 or base64 format", type=str)
|
||||
parser.add_argument("-x", "--export", action="store_true", default=None, help="export identity to hex, base32 or base64 format")
|
||||
|
||||
parser.add_argument("-v", "--verbose", action="count", default=0, help="increase verbosity")
|
||||
parser.add_argument("-q", "--quiet", action="count", default=0, help="decrease verbosity")
|
||||
|
||||
parser.add_argument("-a", "--announce", metavar="aspects", action="store", default=None, help="announce a destination based on this Identity")
|
||||
parser.add_argument("-H", "--hash", metavar="aspects", action="store", default=None, help="show destination hashes for other aspects for this Identity")
|
||||
parser.add_argument("-e", "--encrypt", metavar="file", action="store", default=None, help="encrypt file")
|
||||
parser.add_argument("-d", "--decrypt", metavar="file", action="store", default=None, help="decrypt file")
|
||||
parser.add_argument("-s", "--sign", metavar="path", action="store", default=None, help="sign file")
|
||||
parser.add_argument("-V", "--validate", metavar="path", action="store", default=None, help="validate signature")
|
||||
|
||||
parser.add_argument("-r", "--read", metavar="file", action="store", default=None, help="input file path", type=str)
|
||||
parser.add_argument("-w", "--write", metavar="file", action="store", default=None, help="output file path", type=str)
|
||||
parser.add_argument("-f", "--force", action="store_true", default=None, help="write output even if it overwrites existing files")
|
||||
parser.add_argument("-I", "--stdin", action="store_true", default=False, help=argparse.SUPPRESS) # "read input from STDIN instead of file"
|
||||
parser.add_argument("-O", "--stdout", action="store_true", default=False, help=argparse.SUPPRESS) # help="write output to STDOUT instead of file",
|
||||
|
||||
parser.add_argument("-R", "--request", action="store_true", default=False, help="request unknown Identities from the network")
|
||||
parser.add_argument("-t", action="store", metavar="seconds", type=float, help="identity request timeout before giving up", default=RNS.Transport.PATH_REQUEST_TIMEOUT)
|
||||
parser.add_argument("-p", "--print-identity", action="store_true", default=False, help="print identity info and exit")
|
||||
parser.add_argument("-P", "--print-private", action="store_true", default=False, help="allow displaying private keys")
|
||||
|
||||
parser.add_argument("-b", "--base64", action="store_true", default=False, help="Use base64-encoded input and output")
|
||||
parser.add_argument("-B", "--base32", action="store_true", default=False, help="Use base32-encoded input and output")
|
||||
|
||||
parser.add_argument("--version", action="version", version="rnid {version}".format(version=__version__))
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
ops = 0;
|
||||
for t in [args.encrypt, args.decrypt, args.validate, args.sign]:
|
||||
if t:
|
||||
ops += 1
|
||||
|
||||
if ops > 1:
|
||||
RNS.log("This utility currently only supports one of the encrypt, decrypt, sign or verify operations per invocation", RNS.LOG_ERROR)
|
||||
exit(1)
|
||||
|
||||
if not args.read:
|
||||
if args.encrypt:
|
||||
args.read = args.encrypt
|
||||
if args.decrypt:
|
||||
args.read = args.decrypt
|
||||
if args.sign:
|
||||
args.read = args.sign
|
||||
|
||||
identity_str = args.identity
|
||||
if args.import_str:
|
||||
identity_bytes = None
|
||||
try:
|
||||
if args.base64:
|
||||
identity_bytes = base64.urlsafe_b64decode(args.import_str)
|
||||
elif args.base32:
|
||||
identity_bytes = base64.b32decode(args.import_str)
|
||||
else:
|
||||
identity_bytes = bytes.fromhex(args.import_str)
|
||||
except Exception as e:
|
||||
print("Invalid identity data specified for import: "+str(e))
|
||||
exit(41)
|
||||
|
||||
try:
|
||||
identity = RNS.Identity.from_bytes(identity_bytes)
|
||||
except Exception as e:
|
||||
print("Could not create Reticulum identity from specified data: "+str(e))
|
||||
exit(42)
|
||||
|
||||
RNS.log("Identity imported")
|
||||
if args.base64:
|
||||
RNS.log("Public Key : "+base64.urlsafe_b64encode(identity.get_public_key()).decode("utf-8"))
|
||||
elif args.base32:
|
||||
RNS.log("Public Key : "+base64.b32encode(identity.get_public_key()).decode("utf-8"))
|
||||
else:
|
||||
RNS.log("Public Key : "+RNS.hexrep(identity.get_public_key(), delimit=False))
|
||||
if identity.prv:
|
||||
if args.print_private:
|
||||
if args.base64:
|
||||
RNS.log("Private Key : "+base64.urlsafe_b64encode(identity.get_private_key()).decode("utf-8"))
|
||||
elif args.base32:
|
||||
RNS.log("Private Key : "+base64.b32encode(identity.get_private_key()).decode("utf-8"))
|
||||
else:
|
||||
RNS.log("Private Key : "+RNS.hexrep(identity.get_private_key(), delimit=False))
|
||||
else:
|
||||
RNS.log("Private Key : Hidden")
|
||||
|
||||
if args.write:
|
||||
try:
|
||||
wp = os.path.expanduser(args.write)
|
||||
if not os.path.isfile(wp) or args.force:
|
||||
identity.to_file(wp)
|
||||
RNS.log("Wrote imported identity to "+str(args.write))
|
||||
else:
|
||||
print("File "+str(wp)+" already exists, not overwriting")
|
||||
exit(43)
|
||||
|
||||
except Exception as e:
|
||||
print("Error while writing imported identity to file: "+str(e))
|
||||
exit(44)
|
||||
|
||||
exit(0)
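The import branch above accepts the same key material in three encodings, selected by -b/-B, and falls back to hex. The tool relies on all three being lossless round-trips over the raw key bytes, which can be checked with the standard library alone (the 64-byte value here is a stand-in for a real Reticulum private key, not derived from one):

```python
import base64

key = bytes(range(64))  # stand-in for a 512-bit private key blob

# hex, base32 and urlsafe-base64 all encode the same bytes losslessly,
# so an exported identity can be re-imported in any of the three forms
assert bytes.fromhex(key.hex()) == key
assert base64.b32decode(base64.b32encode(key)) == key
assert base64.urlsafe_b64decode(base64.urlsafe_b64encode(key)) == key
```

Note that rnid uses the urlsafe base64 alphabet (- and _ instead of + and /), so exported strings can be passed on a command line without quoting.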
        if not args.generate and not identity_str:
            print("\nNo identity provided, cannot continue\n")
            parser.print_help()
            print("")
            exit(2)

        else:
            targetloglevel = 4
            verbosity = args.verbose
            quietness = args.quiet
            if verbosity != 0 or quietness != 0:
                targetloglevel = targetloglevel+verbosity-quietness

            # Start Reticulum
            reticulum = RNS.Reticulum(configdir=args.config, loglevel=targetloglevel)
            RNS.compact_log_fmt = True
            if args.stdout:
                RNS.loglevel = -1

            if args.generate:
                identity = RNS.Identity()
                if not args.force and os.path.isfile(args.generate):
                    RNS.log("Identity file "+str(args.generate)+" already exists. Not overwriting.", RNS.LOG_ERROR)
                    exit(3)
                else:
                    try:
                        identity.to_file(args.generate)
                        RNS.log("New identity written to "+str(args.generate))
                        exit(0)
                    except Exception as e:
                        RNS.log("An error occurred while saving the generated Identity.", RNS.LOG_ERROR)
                        RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
                        exit(4)

            identity = None
            if len(identity_str) == RNS.Reticulum.TRUNCATED_HASHLENGTH//8*2 and not os.path.isfile(identity_str):
                # Try recalling Identity from hex-encoded hash
                try:
                    destination_hash = bytes.fromhex(identity_str)
                    identity = RNS.Identity.recall(destination_hash)

                    if identity == None:
                        if not args.request:
                            RNS.log("Could not recall Identity for "+RNS.prettyhexrep(destination_hash)+".", RNS.LOG_ERROR)
                            RNS.log("You can query the network for unknown Identities with the -R option.", RNS.LOG_ERROR)
                            exit(5)
                        else:
                            RNS.Transport.request_path(destination_hash)
                            def spincheck():
                                return RNS.Identity.recall(destination_hash) != None
                            spin(spincheck, "Requesting unknown Identity for "+RNS.prettyhexrep(destination_hash), args.t)

                            if not spincheck():
                                RNS.log("Identity request timed out", RNS.LOG_ERROR)
                                exit(6)
                            else:
                                identity = RNS.Identity.recall(destination_hash)
                                RNS.log("Received Identity "+str(identity)+" for destination "+RNS.prettyhexrep(destination_hash)+" from the network")

                    else:
                        RNS.log("Recalled Identity "+str(identity)+" for destination "+RNS.prettyhexrep(destination_hash))

                except Exception as e:
                    RNS.log("Invalid hexadecimal hash provided", RNS.LOG_ERROR)
                    exit(7)

            else:
                # Try loading Identity from file
                if not os.path.isfile(identity_str):
                    RNS.log("Specified Identity file not found")
                    exit(8)
                else:
                    try:
                        identity = RNS.Identity.from_file(identity_str)
                        RNS.log("Loaded Identity "+str(identity)+" from "+str(identity_str))

                    except Exception as e:
                        RNS.log("Could not decode Identity from specified file")
                        exit(9)

        if identity != None:
            if args.hash:
                try:
                    aspects = args.hash.split(".")
                    if not len(aspects) > 0:
                        RNS.log("Invalid destination aspects specified", RNS.LOG_ERROR)
                        exit(32)
                    else:
                        app_name = aspects[0]
                        aspects = aspects[1:]
                        if identity.pub != None:
                            destination = RNS.Destination(identity, RNS.Destination.OUT, RNS.Destination.SINGLE, app_name, *aspects)
                            RNS.log("The "+str(args.hash)+" destination for this Identity is "+RNS.prettyhexrep(destination.hash))
                            RNS.log("The full destination specifier is "+str(destination))
                            time.sleep(0.25)
                            exit(0)
                        else:
                            raise KeyError("No public key known")
                except Exception as e:
                    RNS.log("An error occurred while computing the destination hash.", RNS.LOG_ERROR)
                    RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)

                exit(0)
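Both -H and -a above take a dotted full destination name whose first component is the application name and whose remainder are the aspects passed to RNS.Destination. A hedged sketch of that split (the function name is illustrative; rnid does this inline):

```python
def split_full_name(full_name):
    # "app.aspect1.aspect2" -> ("app", ["aspect1", "aspect2"])
    aspects = full_name.split(".")
    if len(aspects) < 1 or aspects[0] == "":
        raise ValueError("Invalid destination aspects specified")
    return aspects[0], aspects[1:]
```

The announce branch additionally requires at least one aspect after the app name (len(aspects) > 1 in the original), while the hash branch accepts a bare app name.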
            if args.announce:
                try:
                    aspects = args.announce.split(".")
                    if not len(aspects) > 1:
                        RNS.log("Invalid destination aspects specified", RNS.LOG_ERROR)
                        exit(32)
                    else:
                        app_name = aspects[0]
                        aspects = aspects[1:]
                        if identity.prv != None:
                            destination = RNS.Destination(identity, RNS.Destination.IN, RNS.Destination.SINGLE, app_name, *aspects)
                            RNS.log("Created destination "+str(destination))
                            RNS.log("Announcing destination "+RNS.prettyhexrep(destination.hash))
                            destination.announce()
                            time.sleep(0.25)
                            exit(0)
                        else:
                            destination = RNS.Destination(identity, RNS.Destination.OUT, RNS.Destination.SINGLE, app_name, *aspects)
                            RNS.log("The "+str(args.announce)+" destination for this Identity is "+RNS.prettyhexrep(destination.hash))
                            RNS.log("The full destination specifier is "+str(destination))
                            RNS.log("Cannot announce this destination, since the private key is not held")
                            time.sleep(0.25)
                            exit(33)
                except Exception as e:
                    RNS.log("An error occurred while attempting to send the announce.", RNS.LOG_ERROR)
                    RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)

                exit(0)

            if args.print_identity:
                if args.base64:
                    RNS.log("Public Key : "+base64.urlsafe_b64encode(identity.get_public_key()).decode("utf-8"))
                elif args.base32:
                    RNS.log("Public Key : "+base64.b32encode(identity.get_public_key()).decode("utf-8"))
                else:
                    RNS.log("Public Key : "+RNS.hexrep(identity.get_public_key(), delimit=False))
                if identity.prv:
                    if args.print_private:
                        if args.base64:
                            RNS.log("Private Key : "+base64.urlsafe_b64encode(identity.get_private_key()).decode("utf-8"))
                        elif args.base32:
                            RNS.log("Private Key : "+base64.b32encode(identity.get_private_key()).decode("utf-8"))
                        else:
                            RNS.log("Private Key : "+RNS.hexrep(identity.get_private_key(), delimit=False))
                    else:
                        RNS.log("Private Key : Hidden")
                exit(0)

            if args.export:
                if identity.prv:
                    if args.base64:
                        RNS.log("Exported Identity : "+base64.urlsafe_b64encode(identity.get_private_key()).decode("utf-8"))
                    elif args.base32:
                        RNS.log("Exported Identity : "+base64.b32encode(identity.get_private_key()).decode("utf-8"))
                    else:
                        RNS.log("Exported Identity : "+RNS.hexrep(identity.get_private_key(), delimit=False))
                else:
                    RNS.log("Identity doesn't hold a private key, cannot export")
                    exit(50)

                exit(0)

            if args.validate:
                if not args.read and args.validate.lower().endswith("."+SIG_EXT):
                    args.read = str(args.validate).replace("."+SIG_EXT, "")

                if not os.path.isfile(args.validate):
                    RNS.log("Signature file "+str(args.read)+" not found", RNS.LOG_ERROR)
                    exit(10)

                if not os.path.isfile(args.read):
                    RNS.log("Input file "+str(args.read)+" not found", RNS.LOG_ERROR)
                    exit(11)

            data_input = None
            if args.read:
                if not os.path.isfile(args.read):
                    RNS.log("Input file "+str(args.read)+" not found", RNS.LOG_ERROR)
                    exit(12)
                else:
                    try:
                        data_input = open(args.read, "rb")
                    except Exception as e:
                        RNS.log("Could not open input file for reading", RNS.LOG_ERROR)
                        RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
                        exit(13)

            # TODO: Actually expand this to a good solution
            # probably need to create a wrapper that takes
            # into account not closing stdin when done
            # elif args.stdin:
            #     data_input = sys.stdin

            data_output = None
            if args.encrypt and not args.write and not args.stdout and args.read:
                args.write = str(args.read)+"."+ENCRYPT_EXT

            if args.decrypt and not args.write and not args.stdout and args.read and args.read.lower().endswith("."+ENCRYPT_EXT):
                args.write = str(args.read).replace("."+ENCRYPT_EXT, "")

            if args.sign and identity.prv == None:
                RNS.log("Specified Identity does not hold a private key. Cannot sign.", RNS.LOG_ERROR)
                exit(14)

            if args.sign and not args.write and not args.stdout and args.read:
                args.write = str(args.read)+"."+SIG_EXT

            if args.write:
                if not args.force and os.path.isfile(args.write):
                    RNS.log("Output file "+str(args.write)+" already exists. Not overwriting.", RNS.LOG_ERROR)
                    exit(15)
                else:
                    try:
                        data_output = open(args.write, "wb")
                    except Exception as e:
                        RNS.log("Could not open output file for writing", RNS.LOG_ERROR)
                        RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
                        exit(15)

            # TODO: Actually expand this to a good solution
            # probably need to create a wrapper that takes
            # into account not closing stdout when done
            # elif args.stdout:
            #     data_output = sys.stdout

            if args.sign:
                if identity.prv == None:
                    RNS.log("Specified Identity does not hold a private key. Cannot sign.", RNS.LOG_ERROR)
                    exit(16)

                if not data_input:
                    if not args.stdout:
                        RNS.log("Signing requested, but no input data specified", RNS.LOG_ERROR)
                        exit(17)
                else:
                    if not data_output:
                        if not args.stdout:
                            RNS.log("Signing requested, but no output specified", RNS.LOG_ERROR)
                            exit(18)

                    if not args.stdout:
                        RNS.log("Signing "+str(args.read))

                    try:
                        data_output.write(identity.sign(data_input.read()))
                        data_output.close()
                        data_input.close()

                        if not args.stdout:
                            if args.read:
                                RNS.log("File "+str(args.read)+" signed with "+str(identity)+" to "+str(args.write))
                                exit(0)

                    except Exception as e:
                        if not args.stdout:
                            RNS.log("An error occurred while signing data.", RNS.LOG_ERROR)
                            RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
                        try:
                            data_output.close()
                        except:
                            pass
                        try:
                            data_input.close()
                        except:
                            pass
                        exit(19)

            if args.validate:
                if not data_input:
                    if not args.stdout:
                        RNS.log("Signature verification requested, but no input data specified", RNS.LOG_ERROR)
                        exit(20)
                else:
                    # if not args.stdout:
                    #     RNS.log("Verifying "+str(args.validate)+" for "+str(args.read))

                    try:
                        try:
                            sig_input = open(args.validate, "rb")
                        except Exception as e:
                            RNS.log("An error occurred while opening "+str(args.validate)+".", RNS.LOG_ERROR)
                            RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
                            exit(21)

                        validated = identity.validate(sig_input.read(), data_input.read())
                        sig_input.close()
                        data_input.close()

                        if not validated:
                            if not args.stdout:
                                RNS.log("Signature "+str(args.validate)+" for file "+str(args.read)+" is invalid", RNS.LOG_ERROR)
                                exit(22)
                        else:
                            if not args.stdout:
                                RNS.log("Signature "+str(args.validate)+" for file "+str(args.read)+" made by Identity "+str(identity)+" is valid")
                                exit(0)

                    except Exception as e:
                        if not args.stdout:
                            RNS.log("An error occurred while validating signature.", RNS.LOG_ERROR)
                            RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
                        try:
                            data_output.close()
                        except:
                            pass
                        try:
                            data_input.close()
                        except:
                            pass
                        exit(23)
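The sign and validate branches above implement a detached-signature flow: signing writes Identity.sign(data) to <file>.rsg, and validation later checks that signature against the original file. The shape of that flow can be sketched with hmac standing in for RNS.Identity.sign/validate (the real tool uses the Identity's asymmetric keys, not a shared secret; key, sign and validate here are illustrative stand-ins):

```python
import hashlib
import hmac

def sign(key, data):
    # Stand-in for RNS.Identity.sign(); returns a detached signature
    return hmac.new(key, data, hashlib.sha256).digest()

def validate(key, signature, data):
    # Stand-in for RNS.Identity.validate(); True when signature matches data
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

key = b"not-a-real-identity-key"
data = b"file contents"
sig = sign(key, data)

assert validate(key, sig, data)
assert not validate(key, sig, b"tampered contents")
```

As in rnid, the signature lives apart from the data, so validation needs both the .rsg file and the untouched original.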
            if args.encrypt:
                if not data_input:
                    if not args.stdout:
                        RNS.log("Encryption requested, but no input data specified", RNS.LOG_ERROR)
                        exit(24)
                else:
                    if not data_output:
                        if not args.stdout:
                            RNS.log("Encryption requested, but no output specified", RNS.LOG_ERROR)
                            exit(25)

                    if not args.stdout:
                        RNS.log("Encrypting "+str(args.read))

                    try:
                        more_data = True
                        while more_data:
                            chunk = data_input.read(CHUNK_SIZE)
                            if chunk:
                                data_output.write(identity.encrypt(chunk))
                            else:
                                more_data = False
                        data_output.close()
                        data_input.close()
                        if not args.stdout:
                            if args.read:
                                RNS.log("File "+str(args.read)+" encrypted for "+str(identity)+" to "+str(args.write))
                                exit(0)

                    except Exception as e:
                        if not args.stdout:
                            RNS.log("An error occurred while encrypting data.", RNS.LOG_ERROR)
                            RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
                        try:
                            data_output.close()
                        except:
                            pass
                        try:
                            data_input.close()
                        except:
                            pass
                        exit(26)
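The encrypt loop above never loads the whole file: it reads 16 MiB (CHUNK_SIZE) at a time and writes each chunk's ciphertext as it goes, stopping when read() returns an empty bytes object. The read-loop shape, with a reversible stand-in transform in place of identity.encrypt (process_stream and the byte-reversal transform are illustrative, not rnid API):

```python
import io

CHUNK_SIZE = 16 * 1024 * 1024

def process_stream(data_input, data_output, transform, chunk_size=CHUNK_SIZE):
    # Mirrors the rnid loop: read until read() returns b"", transform each chunk
    while True:
        chunk = data_input.read(chunk_size)
        if not chunk:
            break
        data_output.write(transform(chunk))

# Round-trip demo: per-chunk byte reversal is its own inverse when the
# chunk boundaries line up, just as decrypt undoes encrypt chunk by chunk
src = io.BytesIO(b"abc" * 10)
dst = io.BytesIO()
process_stream(src, dst, lambda c: c[::-1], chunk_size=7)
```

One consequence of this design, visible in the decrypt branch as well, is that each chunk is an independent ciphertext: a file encrypted this way is a concatenation of per-chunk tokens, not one monolithic ciphertext.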
            if args.decrypt:
                if identity.prv == None:
                    RNS.log("Specified Identity does not hold a private key. Cannot decrypt.", RNS.LOG_ERROR)
                    exit(27)

                if not data_input:
                    if not args.stdout:
                        RNS.log("Decryption requested, but no input data specified", RNS.LOG_ERROR)
                        exit(28)
                else:
                    if not data_output:
                        if not args.stdout:
                            RNS.log("Decryption requested, but no output specified", RNS.LOG_ERROR)
                            exit(29)

                    if not args.stdout:
                        RNS.log("Decrypting "+str(args.read)+"...")

                    try:
                        more_data = True
                        while more_data:
                            chunk = data_input.read(CHUNK_SIZE)
                            if chunk:
                                plaintext = identity.decrypt(chunk)
                                if plaintext == None:
                                    if not args.stdout:
                                        RNS.log("Data could not be decrypted with the specified Identity")
                                    exit(30)
                                else:
                                    data_output.write(plaintext)
                            else:
                                more_data = False
                        data_output.close()
                        data_input.close()
                        if not args.stdout:
                            if args.read:
                                RNS.log("File "+str(args.read)+" decrypted with "+str(identity)+" to "+str(args.write))
                                exit(0)

                    except Exception as e:
                        if not args.stdout:
                            RNS.log("An error occurred while decrypting data.", RNS.LOG_ERROR)
                            RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
                        try:
                            data_output.close()
                        except:
                            pass
                        try:
                            data_input.close()
                        except:
                            pass
                        exit(31)

            if True:
                pass

            elif False:
                pass

            else:
                print("")
                parser.print_help()
                print("")

    except KeyboardInterrupt:
        print("")
        exit(255)

if __name__ == "__main__":
    main()

@@ -0,0 +1,74 @@
#!/usr/bin/env python3

# MIT License
#
# Copyright (c) 2023 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import RNS
import argparse
import time

from RNS._version import __version__


def program_setup(configdir, verbosity = 0, quietness = 0, service = False):
    targetverbosity = verbosity-quietness

    if service:
        targetlogdest = RNS.LOG_FILE
        targetverbosity = None
    else:
        targetlogdest = RNS.LOG_STDOUT

    reticulum = RNS.Reticulum(configdir=configdir, verbosity=targetverbosity, logdest=targetlogdest)
    exit(0)

def main():
    try:
        parser = argparse.ArgumentParser(description="Reticulum Distributed Identity Resolver")
        parser.add_argument("--config", action="store", default=None, help="path to alternative Reticulum config directory", type=str)
        parser.add_argument('-v', '--verbose', action='count', default=0)
        parser.add_argument('-q', '--quiet', action='count', default=0)
        parser.add_argument("--exampleconfig", action='store_true', default=False, help="print verbose configuration example to stdout and exit")
        parser.add_argument("--version", action="version", version="ir {version}".format(version=__version__))

        args = parser.parse_args()

        if args.exampleconfig:
            print(__example_rns_config__)
            exit()

        if args.config:
            configarg = args.config
        else:
            configarg = None

        program_setup(configdir = configarg, verbosity=args.verbose, quietness=args.quiet)

    except KeyboardInterrupt:
        print("")
        exit()

__example_rns_config__ = '''# This is an example Identity Resolver file.
'''

if __name__ == "__main__":
    main()

@@ -30,7 +30,7 @@ import argparse
from RNS._version import __version__


def program_setup(configdir, table, rates, drop, destination_hexhash, verbosity, timeout, drop_queues):
def program_setup(configdir, table, rates, drop, destination_hexhash, verbosity, timeout, drop_queues, drop_via):
    if table:
        destination_hash = None
        if destination_hexhash != None:
@@ -155,6 +155,29 @@ def program_setup(configdir, table, rates, drop, destination_hexhash, verbosity,
            sys.exit(1)

    elif drop_via:
        try:
            dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
            if len(destination_hexhash) != dest_len:
                raise ValueError("Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2))
            try:
                destination_hash = bytes.fromhex(destination_hexhash)
            except Exception as e:
                raise ValueError("Invalid destination entered. Check your input.")
        except Exception as e:
            print(str(e))
            sys.exit(1)

        reticulum = RNS.Reticulum(configdir = configdir, loglevel = 3+verbosity)

        if reticulum.drop_all_via(destination_hash):
            print("Dropped all paths via "+RNS.prettyhexrep(destination_hash))
        else:
            print("Unable to drop paths via "+RNS.prettyhexrep(destination_hash)+". Does the transport instance exist?")
            sys.exit(1)
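The new drop-via branch above validates the destination the same way the existing drop branch does: the hex string must be exactly (TRUNCATED_HASHLENGTH//8)*2 characters and must parse as hex. A self-contained sketch of that check, assuming the usual 128-bit truncated hash (32 hex characters); the constant and function name here are illustrative, not imported from RNS:

```python
TRUNCATED_HASHLENGTH = 128  # bits; assumption matching stock Reticulum

def parse_destination(destination_hexhash):
    # Mirrors the rnpath validation: length check first, then hex parse
    dest_len = (TRUNCATED_HASHLENGTH // 8) * 2
    if len(destination_hexhash) != dest_len:
        raise ValueError(
            "Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(
                hex=dest_len, byte=dest_len // 2))
    try:
        return bytes.fromhex(destination_hexhash)
    except Exception:
        raise ValueError("Invalid destination entered. Check your input.")
```

Doing the length check before the hex parse lets the tool distinguish "wrong length" from "not hex at all" in its error messages.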

    else:
        try:
            dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
@@ -187,17 +210,22 @@ def program_setup(configdir, table, rates, drop, destination_hexhash, verbosity,

        if RNS.Transport.has_path(destination_hash):
            hops = RNS.Transport.hops_to(destination_hash)
            next_hop = RNS.prettyhexrep(reticulum.get_next_hop(destination_hash))
            next_hop_interface = reticulum.get_next_hop_if_name(destination_hash)

            if hops != 1:
                ms = "s"
            next_hop_bytes = reticulum.get_next_hop(destination_hash)
            if next_hop_bytes == None:
                print("\r                                \rError: Invalid path data returned")
                sys.exit(1)
            else:
                ms = ""
                next_hop = RNS.prettyhexrep(next_hop_bytes)
                next_hop_interface = reticulum.get_next_hop_if_name(destination_hash)

            print("\rPath found, destination "+RNS.prettyhexrep(destination_hash)+" is "+str(hops)+" hop"+ms+" away via "+next_hop+" on "+next_hop_interface)
                if hops != 1:
                    ms = "s"
                else:
                    ms = ""

                print("\rPath found, destination "+RNS.prettyhexrep(destination_hash)+" is "+str(hops)+" hop"+ms+" away via "+next_hop+" on "+next_hop_interface)
        else:
            print("\r                                \rPath not found")
            print("\r                                \rPath not found")
            sys.exit(1)

@@ -251,6 +279,13 @@ def main():
            default=False
        )

        parser.add_argument(
            "-x", "--drop-via",
            action="store_true",
            help="drop all paths via specified transport instance",
            default=False
        )

        parser.add_argument(
            "-w",
            action="store",
@@ -277,7 +312,7 @@ def main():
        else:
            configarg = None

        if not args.drop_announces and not args.table and not args.rates and not args.destination:
        if not args.drop_announces and not args.table and not args.rates and not args.destination and not args.drop_via:
            print("")
            parser.print_help()
            print("")
@@ -291,6 +326,7 @@ def main():
                verbosity = args.verbose,
                timeout = args.w,
                drop_queues = args.drop_announces,
                drop_via = args.drop_via,
            )
            sys.exit(0)


@ -31,8 +31,10 @@ import argparse
|
|||
from RNS._version import __version__
|
||||
|
||||
DEFAULT_PROBE_SIZE = 16
|
||||
DEFAULT_TIMEOUT = 12
|
||||
|
||||
def program_setup(configdir, destination_hexhash, size=DEFAULT_PROBE_SIZE, full_name = None, verbosity = 0):
|
||||
def program_setup(configdir, destination_hexhash, size=None, full_name = None, verbosity = 0, timeout=None, wait=0, probes=1):
|
||||
if size == None: size = DEFAULT_PROBE_SIZE
|
||||
if full_name == None:
|
||||
print("The full destination name including application name aspects must be specified for the destination")
|
||||
exit()
|
||||
|
@@ -71,14 +73,19 @@ def program_setup(configdir, destination_hexhash, size=DEFAULT_PROBE_SIZE, full_
     print("Path to "+RNS.prettyhexrep(destination_hash)+" requested ", end=" ")
     sys.stdout.flush()
 
+    _timeout = time.time() + (timeout or DEFAULT_TIMEOUT+reticulum.get_first_hop_timeout(destination_hash))
     i = 0
     syms = "⢄⢂⢁⡁⡈⡐⡠"
-    while not RNS.Transport.has_path(destination_hash):
+    while not RNS.Transport.has_path(destination_hash) and not time.time() > _timeout:
         time.sleep(0.1)
         print(("\b\b"+syms[i]+" "), end="")
         sys.stdout.flush()
         i = (i+1)%len(syms)
 
+    if time.time() > _timeout:
+        print("\r \rPath request timed out")
+        exit(1)
+
     server_identity = RNS.Identity.recall(destination_hash)
 
     request_destination = RNS.Destination(
@@ -89,101 +96,120 @@ def program_setup(configdir, destination_hexhash, size=DEFAULT_PROBE_SIZE, full_
         *aspects
     )
 
-    probe = RNS.Packet(request_destination, os.urandom(size))
-    receipt = probe.send()
+    sent = 0
+    replies = 0
+    while probes:
 
-    if more_output:
-        more = " via "+RNS.prettyhexrep(reticulum.get_next_hop(destination_hash))+" on "+str(reticulum.get_next_hop_if_name(destination_hash))
-    else:
-        more = ""
+        if sent > 0:
+            time.sleep(wait)
 
-    print("\rSent "+str(size)+" byte probe to "+RNS.prettyhexrep(destination_hash)+more+" ", end=" ")
+        try:
+            probe = RNS.Packet(request_destination, os.urandom(size))
+            probe.pack()
+        except OSError:
+            print("Error: Probe packet size of "+str(len(probe.raw))+" bytes exceed MTU of "+str(RNS.Reticulum.MTU)+" bytes")
+            exit(3)
 
-    i = 0
-    while not receipt.status == RNS.PacketReceipt.DELIVERED:
-        time.sleep(0.1)
-        print(("\b\b"+syms[i]+" "), end="")
-        sys.stdout.flush()
-        i = (i+1)%len(syms)
+        receipt = probe.send()
+        sent += 1
 
-    print("\b\b ")
-    sys.stdout.flush()
+        if more_output:
+            nhd = reticulum.get_next_hop(destination_hash)
+            via_str = " via "+RNS.prettyhexrep(nhd) if nhd != None else ""
+            if_str = " on "+str(reticulum.get_next_hop_if_name(destination_hash)) if reticulum.get_next_hop_if_name(destination_hash) != "None" else ""
+            more = via_str+if_str
+        else:
+            more = ""
 
-    hops = RNS.Transport.hops_to(destination_hash)
-    if hops != 1:
-        ms = "s"
-    else:
-        ms = ""
+        print("\rSent probe "+str(sent)+" ("+str(size)+" bytes) to "+RNS.prettyhexrep(destination_hash)+more+" ", end=" ")
 
-    rtt = receipt.get_rtt()
-    if (rtt >= 1):
-        rtt = round(rtt, 3)
-        rttstring = str(rtt)+" seconds"
-    else:
-        rtt = round(rtt*1000, 3)
-        rttstring = str(rtt)+" milliseconds"
+        _timeout = time.time() + (timeout or DEFAULT_TIMEOUT+reticulum.get_first_hop_timeout(destination_hash))
+        i = 0
+        while receipt.status == RNS.PacketReceipt.SENT and not time.time() > _timeout:
+            time.sleep(0.1)
+            print(("\b\b"+syms[i]+" "), end="")
+            sys.stdout.flush()
+            i = (i+1)%len(syms)
 
-    reception_stats = ""
-    if reticulum.is_connected_to_shared_instance:
-        reception_rssi = reticulum.get_packet_rssi(receipt.proof_packet.packet_hash)
-        reception_snr = reticulum.get_packet_snr(receipt.proof_packet.packet_hash)
-
-        if reception_rssi != None:
-            reception_stats += " [RSSI "+str(reception_rssi)+" dBm]"
-
-        if reception_snr != None:
-            reception_stats += " [SNR "+str(reception_snr)+" dB]"
-
-    else:
-        if receipt.proof_packet != None:
-            if receipt.proof_packet.rssi != None:
-                reception_stats += " [RSSI "+str(receipt.proof_packet.rssi)+" dBm]"
-
-            if receipt.proof_packet.snr != None:
-                reception_stats += " [SNR "+str(receipt.proof_packet.snr)+" dB]"
-
-    print(
-        "Valid reply received from "+
-        RNS.prettyhexrep(receipt.destination.hash)+
-        "\nRound-trip time is "+rttstring+
-        " over "+str(hops)+" hop"+ms+
-        reception_stats
-    )
+        if time.time() > _timeout:
+            print("\r \rProbe timed out")
+
+        else:
+            print("\b\b ")
+            sys.stdout.flush()
+
+            if receipt.status == RNS.PacketReceipt.DELIVERED:
+                replies += 1
+                hops = RNS.Transport.hops_to(destination_hash)
+                if hops != 1:
+                    ms = "s"
+                else:
+                    ms = ""
+
+                rtt = receipt.get_rtt()
+                if (rtt >= 1):
+                    rtt = round(rtt, 3)
+                    rttstring = str(rtt)+" seconds"
+                else:
+                    rtt = round(rtt*1000, 3)
+                    rttstring = str(rtt)+" milliseconds"
+
+                reception_stats = ""
+                if reticulum.is_connected_to_shared_instance:
+                    reception_rssi = reticulum.get_packet_rssi(receipt.proof_packet.packet_hash)
+                    reception_snr = reticulum.get_packet_snr(receipt.proof_packet.packet_hash)
+                    reception_q = reticulum.get_packet_q(receipt.proof_packet.packet_hash)
+
+                    if reception_rssi != None:
+                        reception_stats += " [RSSI "+str(reception_rssi)+" dBm]"
+
+                    if reception_snr != None:
+                        reception_stats += " [SNR "+str(reception_snr)+" dB]"
+
+                    if reception_q != None:
+                        reception_stats += " [Link Quality "+str(reception_q)+"%]"
+
+                else:
+                    if receipt.proof_packet != None:
+                        if receipt.proof_packet.rssi != None:
+                            reception_stats += " [RSSI "+str(receipt.proof_packet.rssi)+" dBm]"
+
+                        if receipt.proof_packet.snr != None:
+                            reception_stats += " [SNR "+str(receipt.proof_packet.snr)+" dB]"
+
+                print(
+                    "Valid reply from "+
+                    RNS.prettyhexrep(receipt.destination.hash)+
+                    "\nRound-trip time is "+rttstring+
+                    " over "+str(hops)+" hop"+ms+
+                    reception_stats+"\n"
+                )
+
+            else:
+                print("\r \rProbe timed out")
+
+        probes -= 1
+
+    loss = round((1-(replies/sent))*100, 2)
+    print(f"Sent {sent}, received {replies}, packet loss {loss}%")
+    if loss > 0:
+        exit(2)
 
     exit(0)


 def main():
     try:
         parser = argparse.ArgumentParser(description="Reticulum Probe Utility")
 
-        parser.add_argument("--config",
-            action="store",
-            default=None,
-            help="path to alternative Reticulum config directory",
-            type=str
-        )
-
-        parser.add_argument(
-            "--version",
-            action="version",
-            version="rnprobe {version}".format(version=__version__)
-        )
-
-        parser.add_argument(
-            "full_name",
-            nargs="?",
-            default=None,
-            help="full destination name in dotted notation",
-            type=str
-        )
-
-        parser.add_argument(
-            "destination_hash",
-            nargs="?",
-            default=None,
-            help="hexadecimal hash of the destination",
-            type=str
-        )
+        parser.add_argument("--config", action="store", default=None, help="path to alternative Reticulum config directory", type=str)
+        parser.add_argument("-s", "--size", action="store", default=None, help="size of probe packet payload in bytes", type=int)
+        parser.add_argument("-n", "--probes", action="store", default=1, help="number of probes to send", type=int)
+        parser.add_argument("-t", "--timeout", metavar="seconds", action="store", default=None, help="timeout before giving up", type=float)
+        parser.add_argument("-w", "--wait", metavar="seconds", action="store", default=0, help="time between each probe", type=float)
+        parser.add_argument("--version", action="version", version="rnprobe {version}".format(version=__version__))
+        parser.add_argument("full_name", nargs="?", default=None, help="full destination name in dotted notation", type=str)
+        parser.add_argument("destination_hash", nargs="?", default=None, help="hexadecimal hash of the destination", type=str)
 
         parser.add_argument('-v', '--verbose', action='count', default=0)
@@ -202,8 +228,12 @@ def main():
             program_setup(
                 configdir = configarg,
                 destination_hexhash = args.destination_hash,
+                size = args.size,
                 full_name = args.full_name,
-                verbosity = args.verbose
+                verbosity = args.verbose,
+                probes = args.probes,
+                wait = args.wait,
+                timeout = args.timeout,
             )
 
     except KeyboardInterrupt:

@@ -30,15 +30,20 @@ from RNS._version import __version__
 
 
 def program_setup(configdir, verbosity = 0, quietness = 0, service = False):
-    targetloglevel = 3+verbosity-quietness
+    targetverbosity = verbosity-quietness
 
     if service:
-        RNS.logdest = RNS.LOG_FILE
-        RNS.logfile = RNS.Reticulum.configdir+"/logfile"
-        targetloglevel = None
+        targetlogdest = RNS.LOG_FILE
+        targetverbosity = None
+    else:
+        targetlogdest = RNS.LOG_STDOUT
 
-    reticulum = RNS.Reticulum(configdir=configdir, loglevel=targetloglevel)
-    RNS.log("Started rnsd version {version}".format(version=__version__), RNS.LOG_NOTICE)
+    reticulum = RNS.Reticulum(configdir=configdir, verbosity=targetverbosity, logdest=targetlogdest)
+    if reticulum.is_connected_to_shared_instance:
+        RNS.log("Started rnsd version {version} connected to another shared local instance, this is probably NOT what you want!".format(version=__version__), RNS.LOG_WARNING)
+    else:
+        RNS.log("Started rnsd version {version}".format(version=__version__), RNS.LOG_NOTICE)
 
     while True:
         time.sleep(1)
@@ -82,7 +87,7 @@ __example_rns_config__ = '''# This is an example Reticulum config file.
 # always-on. This directive is optional and can be removed
 # for brevity.
 
-enable_transport = False
+enable_transport = No
 
 
 # By default, the first program to launch the Reticulum
@@ -106,6 +111,18 @@ share_instance = Yes
 shared_instance_port = 37428
 instance_control_port = 37429
 
+
+# On systems where running instances may not have access
+# to the same shared Reticulum configuration directory,
+# it is still possible to allow full interactivity for
+# running instances, by manually specifying a shared RPC
+# key. In almost all cases, this option is not needed, but
+# it can be useful on operating systems such as Android.
+# The key must be specified as bytes in hexadecimal.
+
+# rpc_key = e5c032d3ec4e64a6aca9927ba8ab73336780f6d71790
+
+
 # You can configure Reticulum to panic and forcibly close
 # if an unrecoverable interface error occurs, such as the
 # hardware device for an interface disappearing. This is
@@ -115,6 +132,17 @@ instance_control_port = 37429
 panic_on_interface_error = No
 
 
+# When Transport is enabled, it is possible to allow the
+# Transport Instance to respond to probe requests from
+# the rnprobe utility. This can be a useful tool to test
+# connectivity. When this option is enabled, the probe
+# destination will be generated from the Identity of the
+# Transport Instance, and printed to the log at startup.
+# Optional, and disabled by default.
+
+respond_to_probes = No
+
+
 [logging]
 # Valid log levels are 0 through 7:
 # 0: Log only critical information

@@ -46,7 +46,7 @@ def size_str(num, suffix='B'):
 
     return "%.2f%s%s" % (num, last_unit, suffix)
 
-def program_setup(configdir, dispall=False, verbosity = 0):
+def program_setup(configdir, dispall=False, verbosity=0, name_filter=None, json=False, astats=False, sorting=None, sort_reverse=False):
     reticulum = RNS.Reticulum(configdir = configdir, loglevel = 3+verbosity)
 
     stats = None
@@ -56,7 +56,44 @@ def program_setup(configdir, dispall=False, verbosity = 0):
         pass
 
     if stats != None:
-        for ifstat in stats["interfaces"]:
+        if json:
+            import json
+            for s in stats:
+                if isinstance(stats[s], bytes):
+                    stats[s] = RNS.hexrep(stats[s], delimit=False)
+
+                if isinstance(stats[s], dict):
+                    for i in stats[s]:
+                        if isinstance(i, dict):
+                            for k in i:
+                                if isinstance(i[k], bytes):
+                                    i[k] = RNS.hexrep(i[k], delimit=False)
+
+            print(json.dumps(stats))
+            exit()
+
+        interfaces = stats["interfaces"]
+        if sorting != None and isinstance(sorting, str):
+            sorting = sorting.lower()
+            if sorting == "rate" or sorting == "bitrate":
+                interfaces.sort(key=lambda i: i["bitrate"], reverse=not sort_reverse)
+            if sorting == "rx":
+                interfaces.sort(key=lambda i: i["rxb"], reverse=not sort_reverse)
+            if sorting == "tx":
+                interfaces.sort(key=lambda i: i["txb"], reverse=not sort_reverse)
+            if sorting == "traffic":
+                interfaces.sort(key=lambda i: i["rxb"]+i["txb"], reverse=not sort_reverse)
+            if sorting == "announces" or sorting == "announce":
+                interfaces.sort(key=lambda i: i["incoming_announce_frequency"]+i["outgoing_announce_frequency"], reverse=not sort_reverse)
+            if sorting == "arx":
+                interfaces.sort(key=lambda i: i["incoming_announce_frequency"], reverse=not sort_reverse)
+            if sorting == "atx":
+                interfaces.sort(key=lambda i: i["outgoing_announce_frequency"], reverse=not sort_reverse)
+            if sorting == "held":
+                interfaces.sort(key=lambda i: i["held_announces"], reverse=not sort_reverse)
+
+
+        for ifstat in interfaces:
             name = ifstat["name"]
 
             if dispall or not (
@@ -67,91 +104,115 @@ def program_setup(configdir, dispall=False, verbosity = 0):
             ):
 
                 if not (name.startswith("I2PInterface[") and ("i2p_connectable" in ifstat and ifstat["i2p_connectable"] == False)):
-                    print("")
-
-                    if ifstat["status"]:
-                        ss = "Up"
-                    else:
-                        ss = "Down"
-
-                    if ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_ACCESS_POINT:
-                        modestr = "Access Point"
-                    elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_POINT_TO_POINT:
-                        modestr = "Point-to-Point"
-                    elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_ROAMING:
-                        modestr = "Roaming"
-                    elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_BOUNDARY:
-                        modestr = "Boundary"
-                    elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_GATEWAY:
-                        modestr = "Gateway"
-                    else:
-                        modestr = "Full"
-
-                    if ifstat["clients"] != None:
-                        clients = ifstat["clients"]
-                        if name.startswith("Shared Instance["):
-                            cnum = max(clients-1,0)
-                            if cnum == 1:
-                                spec_str = " program"
-                            else:
-                                spec_str = " programs"
-
-                            clients_string = "Serving : "+str(cnum)+spec_str
-                        elif name.startswith("I2PInterface["):
-                            if "i2p_connectable" in ifstat and ifstat["i2p_connectable"] == True:
-                                cnum = clients
-                                if cnum == 1:
-                                    spec_str = " connected I2P endpoint"
-                                else:
-                                    spec_str = " connected I2P endpoints"
-
-                                clients_string = "Peers : "+str(cnum)+spec_str
-                            else:
-                                clients_string = ""
-                        else:
-                            clients_string = "Clients : "+str(clients)
-
-                    else:
-                        clients = None
-
-                    print(" {n}".format(n=ifstat["name"]))
-
-                    if "ifac_netname" in ifstat and ifstat["ifac_netname"] != None:
-                        print(" Network : {nn}".format(nn=ifstat["ifac_netname"]))
-
-                    print(" Status : {ss}".format(ss=ss))
-
-                    if clients != None and clients_string != "":
-                        print(" "+clients_string)
-
-                    if not (name.startswith("Shared Instance[") or name.startswith("TCPInterface[Client") or name.startswith("LocalInterface[")):
-                        print(" Mode : {mode}".format(mode=modestr))
-
-                    if "bitrate" in ifstat and ifstat["bitrate"] != None:
-                        print(" Rate : {ss}".format(ss=speed_str(ifstat["bitrate"])))
-
-                    if "peers" in ifstat and ifstat["peers"] != None:
-                        print(" Peers : {np} reachable".format(np=ifstat["peers"]))
-
-                    if "ifac_signature" in ifstat and ifstat["ifac_signature"] != None:
-                        sigstr = "<…"+RNS.hexrep(ifstat["ifac_signature"][-5:], delimit=False)+">"
-                        print(" Access : {nb}-bit IFAC by {sig}".format(nb=ifstat["ifac_size"]*8, sig=sigstr))
-
-                    if "i2p_b32" in ifstat and ifstat["i2p_b32"] != None:
-                        print(" I2P B32 : {ep}".format(ep=str(ifstat["i2p_b32"])))
-
-                    if "announce_queue" in ifstat and ifstat["announce_queue"] != None and ifstat["announce_queue"] > 0:
-                        aqn = ifstat["announce_queue"]
-                        if aqn == 1:
-                            print(" Queued : {np} announce".format(np=aqn))
-                        else:
-                            print(" Queued : {np} announces".format(np=aqn))
-
-                    print(" Traffic : {txb}↑\n {rxb}↓".format(rxb=size_str(ifstat["rxb"]), txb=size_str(ifstat["txb"])))
+                    if name_filter == None or name_filter.lower() in name.lower():
+                        print("")
+
+                        if ifstat["status"]:
+                            ss = "Up"
+                        else:
+                            ss = "Down"
+
+                        if ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_ACCESS_POINT:
+                            modestr = "Access Point"
+                        elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_POINT_TO_POINT:
+                            modestr = "Point-to-Point"
+                        elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_ROAMING:
+                            modestr = "Roaming"
+                        elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_BOUNDARY:
+                            modestr = "Boundary"
+                        elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_GATEWAY:
+                            modestr = "Gateway"
+                        else:
+                            modestr = "Full"
+
+                        if ifstat["clients"] != None:
+                            clients = ifstat["clients"]
+                            if name.startswith("Shared Instance["):
+                                cnum = max(clients-1,0)
+                                if cnum == 1:
+                                    spec_str = " program"
+                                else:
+                                    spec_str = " programs"
+
+                                clients_string = "Serving : "+str(cnum)+spec_str
+                            elif name.startswith("I2PInterface["):
+                                if "i2p_connectable" in ifstat and ifstat["i2p_connectable"] == True:
+                                    cnum = clients
+                                    if cnum == 1:
+                                        spec_str = " connected I2P endpoint"
+                                    else:
+                                        spec_str = " connected I2P endpoints"
+
+                                    clients_string = "Peers : "+str(cnum)+spec_str
+                                else:
+                                    clients_string = ""
+                            else:
+                                clients_string = "Clients : "+str(clients)
+
+                        else:
+                            clients = None
+
+                        print(" {n}".format(n=ifstat["name"]))
+
+                        if "ifac_netname" in ifstat and ifstat["ifac_netname"] != None:
+                            print(" Network : {nn}".format(nn=ifstat["ifac_netname"]))
+
+                        print(" Status : {ss}".format(ss=ss))
+
+                        if clients != None and clients_string != "":
+                            print(" "+clients_string)
+
+                        if not (name.startswith("Shared Instance[") or name.startswith("TCPInterface[Client") or name.startswith("LocalInterface[")):
+                            print(" Mode : {mode}".format(mode=modestr))
+
+                        if "bitrate" in ifstat and ifstat["bitrate"] != None:
+                            print(" Rate : {ss}".format(ss=speed_str(ifstat["bitrate"])))
+
+                        if "airtime_short" in ifstat and "airtime_long" in ifstat:
+                            print(" Airtime : {ats}% (15s), {atl}% (1h)".format(ats=str(ifstat["airtime_short"]),atl=str(ifstat["airtime_long"])))
+
+                        if "channel_load_short" in ifstat and "channel_load_long" in ifstat:
+                            print(" Ch.Load : {ats}% (15s), {atl}% (1h)".format(ats=str(ifstat["channel_load_short"]),atl=str(ifstat["channel_load_long"])))
+
+                        if "peers" in ifstat and ifstat["peers"] != None:
+                            print(" Peers : {np} reachable".format(np=ifstat["peers"]))
+
+                        if "ifac_signature" in ifstat and ifstat["ifac_signature"] != None:
+                            sigstr = "<…"+RNS.hexrep(ifstat["ifac_signature"][-5:], delimit=False)+">"
+                            print(" Access : {nb}-bit IFAC by {sig}".format(nb=ifstat["ifac_size"]*8, sig=sigstr))
+
+                        if "i2p_b32" in ifstat and ifstat["i2p_b32"] != None:
+                            print(" I2P B32 : {ep}".format(ep=str(ifstat["i2p_b32"])))
+
+                        if "tunnelstate" in ifstat and ifstat["tunnelstate"] != None:
+                            print(" I2P : {ts}".format(ts=ifstat["tunnelstate"]))
+
+                        if astats and "announce_queue" in ifstat and ifstat["announce_queue"] != None and ifstat["announce_queue"] > 0:
+                            aqn = ifstat["announce_queue"]
+                            if aqn == 1:
+                                print(" Queued : {np} announce".format(np=aqn))
+                            else:
+                                print(" Queued : {np} announces".format(np=aqn))
+
+                        if astats and "held_announces" in ifstat and ifstat["held_announces"] != None and ifstat["held_announces"] > 0:
+                            aqn = ifstat["held_announces"]
+                            if aqn == 1:
+                                print(" Held : {np} announce".format(np=aqn))
+                            else:
+                                print(" Held : {np} announces".format(np=aqn))
+
+                        if astats and "incoming_announce_frequency" in ifstat and ifstat["incoming_announce_frequency"] != None:
+                            print(" Announces : {iaf}↑".format(iaf=RNS.prettyfrequency(ifstat["outgoing_announce_frequency"])))
+                            print(" {iaf}↓".format(iaf=RNS.prettyfrequency(ifstat["incoming_announce_frequency"])))
+
+                        print(" Traffic : {txb}↑\n {rxb}↓".format(rxb=size_str(ifstat["rxb"]), txb=size_str(ifstat["txb"])))
 
         if "transport_id" in stats and stats["transport_id"] != None:
-            print("\n Reticulum Transport Instance "+RNS.prettyhexrep(stats["transport_id"])+" is running")
+            print("\n Transport Instance "+RNS.prettyhexrep(stats["transport_id"])+" running")
+            if "probe_responder" in stats and stats["probe_responder"] != None:
+                print(" Probe responder at "+RNS.prettyhexrep(stats["probe_responder"])+ " active")
+            print(" Uptime is "+RNS.prettytime(stats["transport_uptime"]))
 
         print("")
@@ -171,8 +232,43 @@ def main():
             help="show all interfaces",
             default=False
         )
 
+        parser.add_argument(
+            "-A",
+            "--announce-stats",
+            action="store_true",
+            help="show announce stats",
+            default=False
+        )
+
+        parser.add_argument(
+            "-s",
+            "--sort",
+            action="store",
+            help="sort interfaces by [rate, traffic, rx, tx, announces, arx, atx, held]",
+            default=None,
+            type=str
+        )
+
+        parser.add_argument(
+            "-r",
+            "--reverse",
+            action="store_true",
+            help="reverse sorting",
+            default=False,
+        )
+
+        parser.add_argument(
+            "-j",
+            "--json",
+            action="store_true",
+            help="output in JSON format",
+            default=False
+        )
+
         parser.add_argument('-v', '--verbose', action='count', default=0)
 
+        parser.add_argument("filter", nargs="?", default=None, help="only display interfaces with names including filter", type=str)
+
         args = parser.parse_args()
@@ -181,7 +277,16 @@ def main():
         else:
             configarg = None
 
-        program_setup(configdir = configarg, dispall = args.all, verbosity=args.verbose)
+        program_setup(
+            configdir = configarg,
+            dispall = args.all,
+            verbosity=args.verbose,
+            name_filter=args.filter,
+            json=args.json,
+            astats=args.announce_stats,
+            sorting=args.sort,
+            sort_reverse=args.reverse,
+        )
 
     except KeyboardInterrupt:
         print("")

@@ -337,13 +337,15 @@ def execute(configdir, identitypath = None, verbosity = 0, quietness = 0, detail
 
     if link == None or link.status == RNS.Link.CLOSED or link.status == RNS.Link.PENDING:
         link = RNS.Link(listener_destination)
+        link.did_identify = False
 
     if not spin(until=lambda: link.status == RNS.Link.ACTIVE, msg="Establishing link with "+RNS.prettyhexrep(destination_hash), timeout=timeout):
         print("Could not establish link with "+RNS.prettyhexrep(destination_hash))
         exit(243)
 
-    if not noid:
+    if not noid and not link.did_identify:
         link.identify(identity)
+        link.did_identify = True
 
     if stdin != None:
         stdin = stdin.encode("utf-8")
@@ -380,7 +382,10 @@ def execute(configdir, identitypath = None, verbosity = 0, quietness = 0, detail
 
     if request_receipt.status == RNS.RequestReceipt.FAILED:
         print("Could not request remote execution")
-        exit(244)
+        if interactive:
+            return
+        else:
+            exit(244)
 
     spin(
         until=lambda:request_receipt.status != RNS.RequestReceipt.DELIVERED,
@@ -390,7 +395,10 @@ def execute(configdir, identitypath = None, verbosity = 0, quietness = 0, detail
 
     if request_receipt.status == RNS.RequestReceipt.FAILED:
         print("No result was received")
-        exit(245)
+        if interactive:
+            return
+        else:
+            exit(245)
 
     spin_stat(
         until=lambda:request_receipt.status != RNS.RequestReceipt.RECEIVING,
@@ -399,7 +407,10 @@ def execute(configdir, identitypath = None, verbosity = 0, quietness = 0, detail
 
     if request_receipt.status == RNS.RequestReceipt.FAILED:
         print("Receiving result failed")
-        exit(246)
+        if interactive:
+            return
+        else:
+            exit(246)
 
     if request_receipt.response != None:
         try:
@@ -414,7 +425,10 @@ def execute(configdir, identitypath = None, verbosity = 0, quietness = 0, detail
 
     except Exception as e:
         print("Received invalid result")
-        exit(247)
+        if interactive:
+            return
+        else:
+            exit(247)
 
     if executed:
         if detailed:
@@ -484,7 +498,10 @@ def execute(configdir, identitypath = None, verbosity = 0, quietness = 0, detail
             exit(248)
     else:
         print("No response")
-        exit(249)
+        if interactive:
+            return
+        else:
+            exit(249)
 
     try:
         if not interactive:
@@ -521,7 +538,7 @@ def main():
     parser.add_argument("-x", '--interactive', action='store_true', default=False, help="enter interactive mode")
     parser.add_argument("-b", '--no-announce', action='store_true', default=False, help="don't announce at program start")
     parser.add_argument('-a', metavar="allowed_hash", dest="allowed", action='append', help="accept from this identity", type=str)
-    parser.add_argument('-n', '--noauth', action='store_true', default=False, help="accept files from anyone")
+    parser.add_argument('-n', '--noauth', action='store_true', default=False, help="accept commands from anyone")
     parser.add_argument('-N', '--noid', action='store_true', default=False, help="don't identify to listener")
     parser.add_argument("-d", '--detailed', action='store_true', default=False, help="show detailed result output")
     parser.add_argument("-m", action='store_true', dest="mirror", default=False, help="mirror exit code of remote command")

146  RNS/__init__.py
@@ -1,6 +1,6 @@
 # MIT License
 #
-# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
+# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors
 #
 # Permission is hereby granted, free of charge, to any person obtaining a copy
 # of this software and associated documentation files (the "Software"), to deal
@@ -32,11 +32,16 @@ from ._version import __version__
 from .Reticulum import Reticulum
 from .Identity import Identity
 from .Link import Link, RequestReceipt
+from .Channel import MessageBase
+from .Buffer import Buffer, RawChannelReader, RawChannelWriter
 from .Transport import Transport
 from .Destination import Destination
 from .Packet import Packet
 from .Packet import PacketReceipt
+from .Resolver import Resolver
 from .Resource import Resource, ResourceAdvertisement
 from .Cryptography import HKDF
+from .Cryptography import Hashes
 
 modules = glob.glob(os.path.dirname(__file__)+"/*.py")
 __all__ = [ os.path.basename(f)[:-3] for f in modules if not f.endswith('__init__.py')]
@@ -55,10 +60,11 @@ LOG_FILE = 0x92
 
 LOG_MAXSIZE = 5*1024*1024
 
-loglevel = LOG_NOTICE
-logfile = None
-logdest = LOG_STDOUT
-logtimefmt = "%Y-%m-%d %H:%M:%S"
+loglevel = LOG_NOTICE
+logfile = None
+logdest = LOG_STDOUT
+logtimefmt = "%Y-%m-%d %H:%M:%S"
+compact_log_fmt = False
 
 instance_random = random.Random()
 instance_random.seed(os.urandom(10))
@@ -99,10 +105,14 @@ def timestamp_str(time_s):
     return time.strftime(logtimefmt, timestamp)
 
 def log(msg, level=3, _override_destination = False):
-    global _always_override_destination
-
+    global _always_override_destination, compact_log_fmt
     msg = str(msg)
     if loglevel >= level:
-        logstring = "["+timestamp_str(time.time())+"] ["+loglevelname(level)+"] "+msg
+        if not compact_log_fmt:
+            logstring = "["+timestamp_str(time.time())+"] ["+loglevelname(level)+"] "+msg
+        else:
+            logstring = "["+timestamp_str(time.time())+"] "+msg
 
         logging_lock.acquire()
 
         if (logdest == LOG_STDOUT or _always_override_destination or _override_destination):
@@ -134,6 +144,12 @@ def rand():
     result = instance_random.random()
     return result
 
+def trace_exception(e):
+    import traceback
+    exception_info = "".join(traceback.TracebackException.from_exception(e).format())
+    log(f"An unhandled {str(type(e))} exception occurred: {str(e)}", LOG_ERROR)
+    log(exception_info, LOG_ERROR)
+
 def hexrep(data, delimit=True):
     try:
         iter(data)
@ -151,9 +167,121 @@ def prettyhexrep(data):
|
|||
hexrep = "<"+delimiter.join("{:02x}".format(c) for c in data)+">"
|
||||
return hexrep
|
||||
|
||||
def prettyspeed(num, suffix="b"):
|
||||
return prettysize(num/8, suffix=suffix)+"ps"
|
||||
|
||||
def prettysize(num, suffix='B'):
|
||||
units = ['','K','M','G','T','P','E','Z']
|
||||
last_unit = 'Y'
|
||||
|
||||
if suffix == 'b':
|
||||
num *= 8
|
||||
units = ['','K','M','G','T','P','E','Z']
|
||||
last_unit = 'Y'
|
||||
|
||||
for unit in units:
|
||||
if abs(num) < 1000.0:
|
||||
if unit == "":
|
||||
return "%.0f %s%s" % (num, unit, suffix)
|
||||
else:
|
||||
return "%.2f %s%s" % (num, unit, suffix)
|
||||
num /= 1000.0
|
||||
|
||||
return "%.2f%s%s" % (num, last_unit, suffix)
|
||||
|
||||
def prettyfrequency(hz, suffix="Hz"):
|
||||
num = hz*1e6
|
||||
units = ["µ", "m", "", "K","M","G","T","P","E","Z"]
|
||||
last_unit = "Y"
|
||||
|
||||
for unit in units:
|
||||
if abs(num) < 1000.0:
|
||||
return "%.2f %s%s" % (num, unit, suffix)
|
||||
num /= 1000.0
|
||||
|
||||
return "%.2f%s%s" % (num, last_unit, suffix)
|
||||
|
||||
def prettydistance(m, suffix="m"):
|
||||
num = m*1e6
|
||||
units = ["µ", "m", "c", ""]
|
||||
last_unit = "K"
|
||||
|
||||
for unit in units:
|
||||
divisor = 1000.0
|
||||
if unit == "m": divisor = 10
|
||||
if unit == "c": divisor = 100
|
||||
|
||||
if abs(num) < divisor:
|
||||
return "%.2f %s%s" % (num, unit, suffix)
|
||||
num /= divisor
|
||||
|
||||
return "%.2f %s%s" % (num, last_unit, suffix)
|
||||
|
||||
def prettytime(time, verbose=False, compact=False):
    days = int(time // (24 * 3600))
    time = time % (24 * 3600)
    hours = int(time // 3600)
    time %= 3600
    minutes = int(time // 60)
    time %= 60
    if compact:
        seconds = int(time)
    else:
        seconds = round(time, 2)

    ss = "" if seconds == 1 else "s"
    sm = "" if minutes == 1 else "s"
    sh = "" if hours == 1 else "s"
    sd = "" if days == 1 else "s"

    displayed = 0
    components = []
    if days > 0 and ((not compact) or displayed < 2):
        components.append(str(days)+" day"+sd if verbose else str(days)+"d")
        displayed += 1

    if hours > 0 and ((not compact) or displayed < 2):
        components.append(str(hours)+" hour"+sh if verbose else str(hours)+"h")
        displayed += 1

    if minutes > 0 and ((not compact) or displayed < 2):
        components.append(str(minutes)+" minute"+sm if verbose else str(minutes)+"m")
        displayed += 1

    if seconds > 0 and ((not compact) or displayed < 2):
        components.append(str(seconds)+" second"+ss if verbose else str(seconds)+"s")
        displayed += 1

    i = 0
    tstr = ""
    for c in components:
        i += 1
        if i == 1:
            pass
        elif i < len(components):
            tstr += ", "
        elif i == len(components):
            tstr += " and "

        tstr += c

    if tstr == "":
        return "0s"
    else:
        return tstr

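The joining logic in `prettytime()` above (at most two components in compact mode, commas between components with " and " before the last) can be condensed as follows. This is an illustrative re-statement of the same behaviour, not part of the diff:

```python
# Condensed sketch of the prettytime() logic shown in the hunk above.
def prettytime(time, verbose=False, compact=False):
    days = int(time // (24 * 3600)); time %= 24 * 3600
    hours = int(time // 3600); time %= 3600
    minutes = int(time // 60); time %= 60
    seconds = int(time) if compact else round(time, 2)

    parts = []
    for value, name, short in ((days, "day", "d"), (hours, "hour", "h"),
                               (minutes, "minute", "m"), (seconds, "second", "s")):
        # Compact mode shows at most the two most significant non-zero components.
        if value > 0 and ((not compact) or len(parts) < 2):
            plural = "" if value == 1 else "s"
            parts.append(f"{value} {name}{plural}" if verbose else f"{value}{short}")

    if not parts: return "0s"
    if len(parts) == 1: return parts[0]
    return ", ".join(parts[:-1]) + " and " + parts[-1]

print(prettytime(93784, verbose=True))  # 1 day, 2 hours, 3 minutes and 4 seconds
print(prettytime(93784, compact=True))  # 1d and 2h
```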
def phyparams():
    print("Required Physical Layer MTU : "+str(Reticulum.MTU)+" bytes")
    print("Plaintext Packet MDU        : "+str(Packet.PLAIN_MDU)+" bytes")
    print("Encrypted Packet MDU        : "+str(Packet.ENCRYPTED_MDU)+" bytes")
    print("Link Curve                  : "+str(Link.CURVE))
    print("Link Packet MDU             : "+str(Link.MDU)+" bytes")
    print("Link Public Key Size        : "+str(Link.ECPUBSIZE*8)+" bits")
    print("Link Private Key Size       : "+str(Link.KEYSIZE*8)+" bits")

def panic():
    os._exit(255)

def exit():
    print("")
    sys.exit(0)

@@ -1 +1 @@
-__version__ = "0.3.7"
+__version__ = "0.7.5"

@@ -147,6 +147,7 @@ class ServerTunnel(I2PTunnel):
            except Exception as e:
                self.status["exception"] = e
                self.status["setup_failed"] = True
                data = None

            try:
                sc_task = asyncio.wait_for(

@@ -0,0 +1,33 @@
# Copyright (c) 2014 Stefan C. Mueller

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.


import os

from RNS.vendor.ifaddr._shared import Adapter, IP

if os.name == "nt":
    from RNS.vendor.ifaddr._win32 import get_adapters
elif os.name == "posix":
    from RNS.vendor.ifaddr._posix import get_adapters
else:
    raise RuntimeError("Unsupported Operating System: %s" % os.name)

__all__ = ['Adapter', 'IP', 'get_adapters']

@@ -0,0 +1,93 @@
# Copyright (c) 2014 Stefan C. Mueller

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.


import os
import ctypes.util
import ipaddress
import collections
import socket

from typing import Iterable, Optional

import RNS.vendor.ifaddr._shared as shared

class ifaddrs(ctypes.Structure):
    pass


ifaddrs._fields_ = [
    ('ifa_next', ctypes.POINTER(ifaddrs)),
    ('ifa_name', ctypes.c_char_p),
    ('ifa_flags', ctypes.c_uint),
    ('ifa_addr', ctypes.POINTER(shared.sockaddr)),
    ('ifa_netmask', ctypes.POINTER(shared.sockaddr)),
]

libc = ctypes.CDLL(ctypes.util.find_library("socket" if os.uname()[0] == "SunOS" else "c"), use_errno=True)  # type: ignore


def get_adapters(include_unconfigured: bool = False) -> Iterable[shared.Adapter]:

    addr0 = addr = ctypes.POINTER(ifaddrs)()
    retval = libc.getifaddrs(ctypes.byref(addr))
    if retval != 0:
        eno = ctypes.get_errno()
        raise OSError(eno, os.strerror(eno))

    ips = collections.OrderedDict()

    def add_ip(adapter_name: str, ip: Optional[shared.IP]) -> None:
        if adapter_name not in ips:
            index = None  # type: Optional[int]
            try:
                # Mypy errors on this when the Windows CI runs:
                # error: Module has no attribute "if_nametoindex"
                index = socket.if_nametoindex(adapter_name)  # type: ignore
            except (OSError, AttributeError):
                pass
            ips[adapter_name] = shared.Adapter(adapter_name, adapter_name, [], index=index)
        if ip is not None:
            ips[adapter_name].ips.append(ip)

    while addr:
        name = addr[0].ifa_name.decode(encoding='UTF-8')
        ip_addr = shared.sockaddr_to_ip(addr[0].ifa_addr)
        if ip_addr:
            if addr[0].ifa_netmask and not addr[0].ifa_netmask[0].sa_familiy:
                addr[0].ifa_netmask[0].sa_familiy = addr[0].ifa_addr[0].sa_familiy
            netmask = shared.sockaddr_to_ip(addr[0].ifa_netmask)
            if isinstance(netmask, tuple):
                netmaskStr = str(netmask[0])
                prefixlen = shared.ipv6_prefixlength(ipaddress.IPv6Address(netmaskStr))
            else:
                assert netmask is not None, f'sockaddr_to_ip({addr[0].ifa_netmask}) returned None'
                netmaskStr = str('0.0.0.0/' + netmask)
                prefixlen = ipaddress.IPv4Network(netmaskStr).prefixlen
            ip = shared.IP(ip_addr, prefixlen, name)
            add_ip(name, ip)
        else:
            if include_unconfigured:
                add_ip(name, None)
        addr = addr[0].ifa_next

    libc.freeifaddrs(addr0)

    return ips.values()

@@ -0,0 +1,198 @@
# Copyright (c) 2014 Stefan C. Mueller

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.


import ctypes
import socket
import ipaddress
import platform

from typing import List, Optional, Tuple, Union

class Adapter(object):
    """
    Represents a network interface device controller (NIC), such as a
    network card. An adapter can have multiple IPs.

    On Linux aliasing (multiple IPs per physical NIC) is implemented
    by creating 'virtual' adapters, each represented by an instance
    of this class. Each of those 'virtual' adapters can have both
    an IPv4 and an IPv6 IP address.
    """

    def __init__(self, name: str, nice_name: str, ips: List['IP'], index: Optional[int] = None) -> None:

        #: Unique name that identifies the adapter in the system.
        #: On Linux this is of the form of `eth0` or `eth0:1`, on
        #: Windows it is a UUID in string representation, such as
        #: `{846EE342-7039-11DE-9D20-806E6F6E6963}`.
        self.name = name

        #: Human readable name of the adapter. On Linux this
        #: is currently the same as :attr:`name`. On Windows
        #: this is the name of the device.
        self.nice_name = nice_name

        #: List of :class:`ifaddr.IP` instances in the order they were
        #: reported by the system.
        self.ips = ips

        #: Adapter index as used by some API (e.g. IPv6 multicast group join).
        self.index = index

    def __repr__(self) -> str:
        return "Adapter(name={name}, nice_name={nice_name}, ips={ips}, index={index})".format(
            name=repr(self.name), nice_name=repr(self.nice_name), ips=repr(self.ips), index=repr(self.index)
        )


# Type of an IPv4 address (a string in "xxx.xxx.xxx.xxx" format)
_IPv4Address = str

# Type of an IPv6 address (a three-tuple `(ip, flowinfo, scope_id)`)
_IPv6Address = Tuple[str, int, int]


class IP(object):
    """
    Represents an IP address of an adapter.
    """

    def __init__(self, ip: Union[_IPv4Address, _IPv6Address], network_prefix: int, nice_name: str) -> None:

        #: IP address. For IPv4 addresses this is a string in
        #: "xxx.xxx.xxx.xxx" format. For IPv6 addresses this
        #: is a three-tuple `(ip, flowinfo, scope_id)`, where
        #: `ip` is a string in the usual colon separated
        #: hex format.
        self.ip = ip

        #: Number of bits of the IP that represent the
        #: network. For a `255.255.255.0` netmask, this
        #: number would be `24`.
        self.network_prefix = network_prefix

        #: Human readable name for this IP.
        #: On Linux this is currently the same as the adapter name.
        #: On Windows this is the name of the network connection
        #: as configured in the system control panel.
        self.nice_name = nice_name

    @property
    def is_IPv4(self) -> bool:
        """
        Returns `True` if this IP is an IPv4 address and `False`
        if it is an IPv6 address.
        """
        return not isinstance(self.ip, tuple)

    @property
    def is_IPv6(self) -> bool:
        """
        Returns `True` if this IP is an IPv6 address and `False`
        if it is an IPv4 address.
        """
        return isinstance(self.ip, tuple)

    def __repr__(self) -> str:
        return "IP(ip={ip}, network_prefix={network_prefix}, nice_name={nice_name})".format(
            ip=repr(self.ip), network_prefix=repr(self.network_prefix), nice_name=repr(self.nice_name)
        )


if platform.system() == "Darwin" or "BSD" in platform.system():

    # BSD derived systems use marginally different structures
    # than either Linux or Windows.
    # I still keep it in `shared` since we can use
    # both structures equally.

    class sockaddr(ctypes.Structure):
        _fields_ = [
            ('sa_len', ctypes.c_uint8),
            ('sa_familiy', ctypes.c_uint8),
            ('sa_data', ctypes.c_uint8 * 14),
        ]

    class sockaddr_in(ctypes.Structure):
        _fields_ = [
            ('sa_len', ctypes.c_uint8),
            ('sa_familiy', ctypes.c_uint8),
            ('sin_port', ctypes.c_uint16),
            ('sin_addr', ctypes.c_uint8 * 4),
            ('sin_zero', ctypes.c_uint8 * 8),
        ]

    class sockaddr_in6(ctypes.Structure):
        _fields_ = [
            ('sa_len', ctypes.c_uint8),
            ('sa_familiy', ctypes.c_uint8),
            ('sin6_port', ctypes.c_uint16),
            ('sin6_flowinfo', ctypes.c_uint32),
            ('sin6_addr', ctypes.c_uint8 * 16),
            ('sin6_scope_id', ctypes.c_uint32),
        ]

else:

    class sockaddr(ctypes.Structure):  # type: ignore
        _fields_ = [('sa_familiy', ctypes.c_uint16), ('sa_data', ctypes.c_uint8 * 14)]

    class sockaddr_in(ctypes.Structure):  # type: ignore
        _fields_ = [
            ('sin_familiy', ctypes.c_uint16),
            ('sin_port', ctypes.c_uint16),
            ('sin_addr', ctypes.c_uint8 * 4),
            ('sin_zero', ctypes.c_uint8 * 8),
        ]

    class sockaddr_in6(ctypes.Structure):  # type: ignore
        _fields_ = [
            ('sin6_familiy', ctypes.c_uint16),
            ('sin6_port', ctypes.c_uint16),
            ('sin6_flowinfo', ctypes.c_uint32),
            ('sin6_addr', ctypes.c_uint8 * 16),
            ('sin6_scope_id', ctypes.c_uint32),
        ]


def sockaddr_to_ip(sockaddr_ptr: 'ctypes.pointer[sockaddr]') -> Optional[Union[_IPv4Address, _IPv6Address]]:
    if sockaddr_ptr:
        if sockaddr_ptr[0].sa_familiy == socket.AF_INET:
            ipv4 = ctypes.cast(sockaddr_ptr, ctypes.POINTER(sockaddr_in))
            ippacked = bytes(bytearray(ipv4[0].sin_addr))
            ip = str(ipaddress.ip_address(ippacked))
            return ip
        elif sockaddr_ptr[0].sa_familiy == socket.AF_INET6:
            ipv6 = ctypes.cast(sockaddr_ptr, ctypes.POINTER(sockaddr_in6))
            flowinfo = ipv6[0].sin6_flowinfo
            ippacked = bytes(bytearray(ipv6[0].sin6_addr))
            ip = str(ipaddress.ip_address(ippacked))
            scope_id = ipv6[0].sin6_scope_id
            return (ip, flowinfo, scope_id)
    return None


def ipv6_prefixlength(address: ipaddress.IPv6Address) -> int:
    prefix_length = 0
    for i in range(address.max_prefixlen):
        if int(address) >> i & 1:
            prefix_length = prefix_length + 1
    return prefix_length

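The `ipv6_prefixlength()` helper above derives the prefix length of an IPv6 netmask by counting its set bits (its popcount). A minimal self-contained demonstration, with illustrative netmask values:

```python
import ipaddress

# Restated from the hunk above: prefix length of an IPv6 netmask
# is the number of set bits in the 128-bit address value.
def ipv6_prefixlength(address: ipaddress.IPv6Address) -> int:
    prefix_length = 0
    for i in range(address.max_prefixlen):  # max_prefixlen is 128 for IPv6
        if int(address) >> i & 1:
            prefix_length = prefix_length + 1
    return prefix_length

print(ipv6_prefixlength(ipaddress.IPv6Address("ffff:ffff:ffff:ffff::")))  # 64
print(ipv6_prefixlength(ipaddress.IPv6Address("::")))                     # 0
```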
@@ -0,0 +1,145 @@
# Copyright (c) 2014 Stefan C. Mueller

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.


import ctypes
from ctypes import wintypes
from typing import Iterable, List

import RNS.vendor.ifaddr._shared as shared

NO_ERROR = 0
ERROR_BUFFER_OVERFLOW = 111
MAX_ADAPTER_NAME_LENGTH = 256
MAX_ADAPTER_DESCRIPTION_LENGTH = 128
MAX_ADAPTER_ADDRESS_LENGTH = 8
AF_UNSPEC = 0


class SOCKET_ADDRESS(ctypes.Structure):
    _fields_ = [('lpSockaddr', ctypes.POINTER(shared.sockaddr)), ('iSockaddrLength', wintypes.INT)]


class IP_ADAPTER_UNICAST_ADDRESS(ctypes.Structure):
    pass


IP_ADAPTER_UNICAST_ADDRESS._fields_ = [
    ('Length', wintypes.ULONG),
    ('Flags', wintypes.DWORD),
    ('Next', ctypes.POINTER(IP_ADAPTER_UNICAST_ADDRESS)),
    ('Address', SOCKET_ADDRESS),
    ('PrefixOrigin', ctypes.c_uint),
    ('SuffixOrigin', ctypes.c_uint),
    ('DadState', ctypes.c_uint),
    ('ValidLifetime', wintypes.ULONG),
    ('PreferredLifetime', wintypes.ULONG),
    ('LeaseLifetime', wintypes.ULONG),
    ('OnLinkPrefixLength', ctypes.c_uint8),
]


class IP_ADAPTER_ADDRESSES(ctypes.Structure):
    pass


IP_ADAPTER_ADDRESSES._fields_ = [
    ('Length', wintypes.ULONG),
    ('IfIndex', wintypes.DWORD),
    ('Next', ctypes.POINTER(IP_ADAPTER_ADDRESSES)),
    ('AdapterName', ctypes.c_char_p),
    ('FirstUnicastAddress', ctypes.POINTER(IP_ADAPTER_UNICAST_ADDRESS)),
    ('FirstAnycastAddress', ctypes.c_void_p),
    ('FirstMulticastAddress', ctypes.c_void_p),
    ('FirstDnsServerAddress', ctypes.c_void_p),
    ('DnsSuffix', ctypes.c_wchar_p),
    ('Description', ctypes.c_wchar_p),
    ('FriendlyName', ctypes.c_wchar_p),
]


iphlpapi = ctypes.windll.LoadLibrary("Iphlpapi")  # type: ignore


def enumerate_interfaces_of_adapter(
    nice_name: str, address: IP_ADAPTER_UNICAST_ADDRESS
) -> Iterable[shared.IP]:

    # Iterate through linked list and fill list
    addresses = []  # type: List[IP_ADAPTER_UNICAST_ADDRESS]
    while True:
        addresses.append(address)
        if not address.Next:
            break
        address = address.Next[0]

    for address in addresses:
        ip = shared.sockaddr_to_ip(address.Address.lpSockaddr)
        assert ip is not None, f'sockaddr_to_ip({address.Address.lpSockaddr}) returned None'
        network_prefix = address.OnLinkPrefixLength
        yield shared.IP(ip, network_prefix, nice_name)


def get_adapters(include_unconfigured: bool = False) -> Iterable[shared.Adapter]:

    # Call GetAdaptersAddresses() with error and buffer size handling
    addressbuffersize = wintypes.ULONG(15 * 1024)
    retval = ERROR_BUFFER_OVERFLOW
    while retval == ERROR_BUFFER_OVERFLOW:
        addressbuffer = ctypes.create_string_buffer(addressbuffersize.value)
        retval = iphlpapi.GetAdaptersAddresses(
            wintypes.ULONG(AF_UNSPEC),
            wintypes.ULONG(0),
            None,
            ctypes.byref(addressbuffer),
            ctypes.byref(addressbuffersize),
        )
    if retval != NO_ERROR:
        raise ctypes.WinError()  # type: ignore

    # Iterate through adapters and fill array
    address_infos = []  # type: List[IP_ADAPTER_ADDRESSES]
    address_info = IP_ADAPTER_ADDRESSES.from_buffer(addressbuffer)
    while True:
        address_infos.append(address_info)
        if not address_info.Next:
            break
        address_info = address_info.Next[0]

    # Iterate through unicast addresses
    result = []  # type: List[shared.Adapter]
    for adapter_info in address_infos:

        # We don't expect non-ascii characters here, so encoding shouldn't matter
        name = adapter_info.AdapterName.decode()
        nice_name = adapter_info.Description
        index = adapter_info.IfIndex

        if adapter_info.FirstUnicastAddress:
            ips = enumerate_interfaces_of_adapter(
                adapter_info.FriendlyName, adapter_info.FirstUnicastAddress[0]
            )
            ips = list(ips)
            result.append(shared.Adapter(name, nice_name, ips, index=index))
        elif include_unconfigured:
            result.append(shared.Adapter(name, nice_name, [], index=index))

    return result

@@ -0,0 +1,57 @@
import ipaddress
import RNS.vendor.ifaddr
import socket

from typing import List

AF_INET6 = socket.AF_INET6.value
AF_INET = socket.AF_INET.value

def interfaces() -> List[str]:
    adapters = RNS.vendor.ifaddr.get_adapters(include_unconfigured=True)
    return [a.name for a in adapters]

def interface_names_to_indexes() -> dict:
    adapters = RNS.vendor.ifaddr.get_adapters(include_unconfigured=True)
    results = {}
    for adapter in adapters:
        results[adapter.name] = adapter.index
    return results

def interface_name_to_nice_name(ifname) -> str:
    try:
        adapters = RNS.vendor.ifaddr.get_adapters(include_unconfigured=True)
        for adapter in adapters:
            if adapter.name == ifname:
                if hasattr(adapter, "nice_name"):
                    return adapter.nice_name
    except:
        return None

    return None

def ifaddresses(ifname) -> dict:
    adapters = RNS.vendor.ifaddr.get_adapters(include_unconfigured=True)
    ifa = {}
    for a in adapters:
        if a.name == ifname:
            ipv4s = []
            ipv6s = []
            for ip in a.ips:
                t = {}
                if ip.is_IPv4:
                    net = ipaddress.ip_network(str(ip.ip)+"/"+str(ip.network_prefix), strict=False)
                    t["addr"] = ip.ip
                    t["prefix"] = ip.network_prefix
                    t["broadcast"] = str(net.broadcast_address)
                    ipv4s.append(t)
                if ip.is_IPv6:
                    t["addr"] = ip.ip[0]
                    ipv6s.append(t)

            if len(ipv4s) > 0:
                ifa[AF_INET] = ipv4s
            if len(ipv6s) > 0:
                ifa[AF_INET6] = ipv6s

    return ifa

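The `ifaddresses()` helper above derives each IPv4 entry's broadcast address by combining the address and prefix into an `ipaddress` network object (`strict=False` allows host bits to be set). A minimal sketch of that single step, with illustrative values:

```python
import ipaddress

# Illustrative values; in ifaddresses() above these come from the adapter's IP list.
ip, prefix = "192.168.1.42", 24

# strict=False masks off the host bits instead of raising ValueError.
net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
print(str(net.broadcast_address))  # 192.168.1.255
```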
@@ -8,6 +8,12 @@ def get_platform():
    import sys
    return sys.platform

def is_linux():
    if get_platform() == "linux":
        return True
    else:
        return False

def is_darwin():
    if get_platform() == "darwin":
        return True

@@ -42,4 +48,4 @@ def cryptography_old_api():
    if cryptography.__version__ == "2.8":
        return True
    else:
        return False

@@ -0,0 +1,112 @@
# Reticulum Development Roadmap
This document outlines the currently established development roadmap for Reticulum.

1. [Currently Active Work Areas](#currently-active-work-areas)
2. [Primary Efforts](#primary-efforts)
   - [Comprehensibility](#comprehensibility)
   - [Universality](#universality)
   - [Functionality](#functionality)
   - [Usability & Utility](#usability--utility)
   - [Interfaceability](#interfaceability)
3. [Auxiliary Efforts](#auxiliary-efforts)
4. [Release History](#release-history)

## Currently Active Work Areas
For each release cycle of Reticulum, improvements and additions from the five [Primary Efforts](#primary-efforts) are selected as active work areas, and can be expected to be included in the upcoming releases within that cycle. While not entirely set in stone for each release cycle, they serve as a pointer to what to expect in the near future.

- The current `0.7.x` release cycle aims at completing
  - [ ] Overhauling and updating the documentation
  - [ ] Distributed Destination Naming System
  - [ ] Create a standalone RNS Daemon app for Android
  - [ ] Network-wide path balancing
  - [ ] Add automatic retries to all use cases of the `Request` API
  - [ ] Performance and memory optimisations of the Python reference implementation
  - [ ] Fixing bugs discovered while operating Reticulum systems and applications

## Primary Efforts
The development path for Reticulum is currently laid out in five distinct areas: *Comprehensibility*, *Universality*, *Functionality*, *Usability & Utility* and *Interfaceability*. Conceptualising the development of Reticulum into these areas serves to advance the implementation and work towards the Foundational Goals & Values of Reticulum.

### Comprehensibility
These efforts are aimed at improving the ease with which Reticulum is understood, and lowering the barrier to entry for people who wish to start building systems on Reticulum.

- Improving [the manual](https://markqvist.github.io/Reticulum/manual/) with tutorials specifically for beginners
- Updating the documentation to reflect recent changes and improvements
  - Update descriptions of protocol mechanics
    - Update announce description
    - Add in-depth explanation of the IFAC system
  - Software
    - Update Sideband screenshots
    - Update Sideband description
    - Update NomadNet screenshots
  - Installation
    - [x] Add a *Reticulum On Raspberry Pi* section
    - [x] Update *Reticulum On Android* section if necessary
    - [x] Update Android install documentation.
  - Communications hardware section
    - Add information about RNode external displays.
    - [x] Packet radio modems.
    - Possibly add other relevant types here as well.
  - Set up a *Best Practices For...* / *Installation Examples* section.
    - Home or office (example)
    - Vehicles (example)
    - No-grid/solar/remote sites (example)

### Universality
These efforts seek to broaden the universality of the Reticulum software and hardware ecosystem by continuously diversifying platform support, and by improving the overall availability and ease of deployment of the Reticulum stack.

- OpenWRT support
- Create a standalone RNS Daemon app for Android
- A lightweight and portable C implementation for microcontrollers, µRNS
- A portable, high-performance Reticulum implementation in C/C++, see [#21](https://github.com/markqvist/Reticulum/discussions/21)
- Performance and memory optimisations of the Python implementation
- Bindings for other programming languages

### Functionality
These efforts aim to expand and improve the core functionality and reliability of Reticulum.

- Add automatic retries to all use cases of the `Request` API
- Network-wide path balancing
- Distributed Destination Naming System
- Globally routable multicast
- Destination proxying
- [Metric-based path selection and multiple paths](https://github.com/markqvist/Reticulum/discussions/86)

### Usability & Utility
These efforts seek to make Reticulum easier to use and operate, and to expand the utility of the stack on deployed systems.

- Easy way to share interface configurations, see [#19](https://github.com/markqvist/Reticulum/discussions/19)
- Transit traffic display in rnstatus
- rnsconfig utility

### Interfaceability
These efforts aim to expand the types of physical and virtual interfaces that Reticulum can natively use to transport data.

- Filesystem interface
- Plain ESP32 devices (ESP-Now, WiFi, Bluetooth, etc.)
- More LoRa transceivers
- AT-compatible modems
- Direct SDR Support
- Optical mediums
  - IR Transceivers
- AWDL / OWL
- HF Modems
- GNU Radio
- CAN-bus
- Raw SPI
- Raw i²c
- MQTT
- XBee
- Tor

## Auxiliary Efforts
The Reticulum ecosystem is enriched by several other software and hardware projects, and the support and improvement of these, in symbiosis with the core Reticulum project, helps expand the reach and utility of Reticulum itself.

This section lists, in no particular order, various important efforts that would be beneficial to the goals of Reticulum.

- The [RNode](https://unsigned.io/rnode/) project
  - [ ] Create a WebUSB-based bootstrapping utility, and integrate this directly into the [RNode Bootstrap Console](#), both on-device, and on an Internet-reachable copy. This will make it much easier to create new RNodes for average users.

## Release History

Please see the [Changelog](./Changelog.md) for a complete release history and changelog of Reticulum.

@@ -30,3 +30,8 @@ help:
	cp -r build/latex/reticulumnetworkstack.pdf ./Reticulum\ Manual.pdf; \
	echo "PDF Manual Generated"; \
	fi

	@if [ $@ = "epub" ]; then \
	cp -r build/epub/ReticulumNetworkStack.epub ./Reticulum\ Manual.epub; \
	echo "EPUB Manual Generated"; \
	fi

@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 3ea52ff0bfd9431c8886e9a105e9d835
+config: e6cf914f5d96347d4f64d2e8bcefb841
 tags: 645f666f9bcd5a90fca523b33c5a78b7

(Binary image files changed: eight images added, sized 162 KiB to 562 KiB, and two images removed, sized 46 KiB and 124 KiB.)