Compare commits

...

1010 Commits

Author SHA1 Message Date
Mark Qvist ba2feaa211 Updated changelog 2024-05-18 18:51:17 +02:00
Mark Qvist 097d2b0dd9 Updated changelog 2024-05-18 18:48:32 +02:00
Mark Qvist bb0ce4faca Added T3S3 flashing, fixed Heltec V3 autoinstaller menu 2024-05-18 18:40:21 +02:00
Mark Qvist 5915228f5b Updated documentation 2024-05-18 18:38:06 +02:00
Mark Qvist 0b66649158 Avoid nRF52 hard reset after EEPROM wipe 2024-05-18 00:18:54 +02:00
markqvist e28dd6e14a
Merge pull request #502 from jacobeva/master
Extend RAK4631 support
2024-05-18 00:15:48 +02:00
markqvist 0a15b4c6c1
Merge branch 'master' into master 2024-05-18 00:15:13 +02:00
markqvist 62db09571d
Merge pull request #504 from jacobeva/hash-feature
Add ability to get target and calculated firmware hash from device
2024-05-18 00:04:24 +02:00
Mark Qvist 444ae0206b Added better handling of non-adoptable interfaces for AutoInterface on Windows 2024-05-17 23:54:48 +02:00
Mark Qvist 4b07e30b9d Updated version 2024-05-17 23:54:04 +02:00
markqvist 583e65419e
Merge pull request #506 from liamcottle/feature/windows-auto-interface
Implement AutoInterface support on Windows
2024-05-17 16:32:33 +02:00
liamcottle 1564930a51 auto interface working on windows 2024-05-17 04:09:11 +12:00
markqvist b81b1de4eb
Merge pull request #500 from faragher/master
Windows DTR timing fix
2024-05-15 20:06:36 +02:00
jacob.eva 746a38f818
Add ability to get target and calculated firmware hash from device 2024-05-13 22:55:49 +01:00
jacob.eva c230eceaa6
Extend RAK4631 support 2024-05-13 21:49:57 +01:00
Mark Qvist 09d9285104 Allow recursive path resolution for clients on roaming-mode interfaces 2024-05-12 12:31:51 +02:00
faragher 3551662187 Changing log levels 2024-05-08 02:19:59 -05:00
faragher f7f34e0ea3 Windows DTR timing adjustments 2024-05-08 02:14:29 -05:00
Mark Qvist 43fc2a6c92 Updated changelog 2024-05-05 20:05:30 +02:00
Mark Qvist b17175dfef Updated changelog 2024-05-05 19:57:48 +02:00
Mark Qvist 1103784997 Updated documentation 2024-05-05 19:56:33 +02:00
Mark Qvist d2feb8b136 Improved path response logic 2024-05-04 21:57:03 +02:00
Mark Qvist f595648a9b Updated tests 2024-05-04 20:27:27 +02:00
Mark Qvist b06f5285c5 Fix LR proof delivery on unknown hop count paths 2024-05-04 20:27:04 +02:00
Mark Qvist 8330f70a27 Fixed link packet routing in topologies where transport packets leak to non-intended instances in the link chain 2024-05-04 19:52:02 +02:00
Mark Qvist 15e10b9435 Added expected hops property to link class 2024-05-04 19:15:57 +02:00
Mark Qvist b91c852330 Updated path request timing 2024-05-04 16:19:04 +02:00
Mark Qvist 75acdf5902 Updated version 2024-05-03 23:49:39 +02:00
Mark Qvist dae40f2684 Removed T3S3 build from autoinstaller 2024-05-03 18:20:17 +02:00
Mark Qvist 4edacf82f3 Merge branch 'master' of github.com:markqvist/Reticulum 2024-05-03 16:22:37 +02:00
markqvist 4b0a0668a5
Update Contributing.md 2024-05-01 17:50:15 +02:00
markqvist a52af17123
Merge pull request #495 from jschulthess/master
optionally load identity file from file in Echo and Link examples
2024-05-01 17:28:10 +02:00
Mark Qvist 0b0a3313c5 Multicast address type modifications 2024-05-01 15:49:48 +02:00
markqvist 34af2e7af7
Merge pull request #476 from thiaguetz/feat/multicast-address-type
feat: implement multicast address type definition on AutoInterface configuration
2024-05-01 15:44:03 +02:00
Jürg Schulthess 12bf7977d2 fix comment 2024-04-29 08:25:40 +02:00
Jürg Schulthess b69b939d6f realign with upstream 2024-04-29 08:10:48 +02:00
Jürg Schulthess b5556f664b realign with upstream 2024-04-29 08:07:22 +02:00
Jürg Schulthess f804ba0263 explicit exit not needed 2024-04-29 08:04:04 +02:00
Jürg Schulthess 84a1ab0ca3 add option to load identity from file 2024-04-29 07:59:55 +02:00
markqvist 465695b9ae
Merge pull request #490 from nothingbutlucas/master
docs: Fix a typo. startig / starting
2024-04-22 01:33:10 +02:00
Mark Qvist a999a4a250 Added support for T3S3 boards to rnodeconf autoinstaller 2024-04-22 01:26:35 +02:00
nothingbutlucas cbb5d99280
docs: Fix a typo. startig / starting
Signed-off-by: nothingbutlucas <69118979+nothingbutlucas@users.noreply.github.com>
2024-04-21 16:11:03 -03:00
Mark Qvist 64f5192c79 Changed rnodeconf autoinstaller menu order 2024-04-20 22:25:57 +02:00
Mark Qvist d223ebc8c0 Added rnodeconf autoinstaller support for Heltec LoRa32 V3 boards 2024-04-20 22:03:14 +02:00
markqvist c28f413fe6
Merge pull request #486 from cobraPA/upstream_add_heltec_v3
Add product and model, plus support for Heltec V3 serial only setup to rnodeconf.
2024-04-20 18:54:09 +02:00
Kevin Brosius 92e5f65887 Add product and model, plus support for Heltec V3 serial only setup to rnodeconf. 2024-04-11 01:41:50 -04:00
Mark Qvist b977f33df6 Display error on unknown model capabilities instead of failing 2024-03-28 12:05:30 +01:00
Mark Qvist 589fcb8201 Added custom EEPROM bootstrap to rnodeconf 2024-03-28 00:04:48 +01:00
Mark Qvist e5427d70ac Added custom EEPROM bootstrap to rnodeconf 2024-03-27 21:48:32 +01:00
Mark Qvist 2f5381b307 Added TCXO model code comment 2024-03-24 11:51:44 +01:00
Thiaguetz 11baace08d feat: implement multicast address type definition on AutoInterface configuration 2024-03-23 00:54:56 -03:00
Mark Qvist a4d5b5cb17 Merge branch 'master' of github.com:markqvist/Reticulum 2024-03-19 11:52:58 +01:00
Mark Qvist 9cb181690e Added link getter to resource advertisement class 2024-03-19 11:52:32 +01:00
markqvist ff6604290e
Update LICENSE 2024-03-10 22:14:29 +01:00
markqvist 2dbd3cbc0f
Update Contributing.md 2024-03-10 22:14:03 +01:00
markqvist 2a11097cac
Update Contributing.md 2024-03-10 22:13:33 +01:00
markqvist c0e3181ae3
Update Contributing.md 2024-03-10 22:11:44 +01:00
markqvist 5a0316ae7f
Update Contributing.md 2024-03-10 20:39:49 +01:00
Mark Qvist 177bb62610 Updated changelog 2024-03-09 21:09:06 +01:00
Mark Qvist 7cd3cde398 Updated changelog 2024-03-09 21:08:17 +01:00
Mark Qvist 29bdcea616 Updated manual 2024-03-09 21:05:59 +01:00
Mark Qvist d9460c43ad Updated version 2024-03-09 21:01:12 +01:00
markqvist fb02e980db
Merge pull request #461 from attermann/firmware_repos
Support for alternate download URL for custom firmware images
2024-03-08 01:10:34 +01:00
Mark Qvist 4947463440 Updated roadmap 2024-03-06 12:14:36 +01:00
Chad Attermann 5565349255 Fixed installation of alternate firmware version
The required version info was not being downloaded when an alternate (not latest)
version was selected, resulting in the error "Could not read locally cached
release information."
2024-03-05 19:02:47 -07:00
Chad Attermann 1b7b131adc Added support for alternate firmware download URL
New command line option `--fw-url` accepts an alternate URL to use for
downloading firmware images.
Note this feature is moderately opinionated when it comes to directory
structure. The intent is to be compatible with GitHub releases, so the
latest version info is expected to be found at
"{fw-url}latest/download/release.json" and firmware images at
"{fw-url}download/{version}/{firmware_file.zip}".
2024-03-05 17:14:52 -07:00
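As a rough illustration of the release layout described above (a minimal sketch assuming GitHub-style release URLs; the helper names and example values are hypothetical, not rnodeconf's actual code):

    # Hypothetical helpers showing where a tool would look, given --fw-url.
    def release_info_url(fw_url):
        # Latest release metadata: {fw-url}latest/download/release.json
        return fw_url + "latest/download/release.json"

    def firmware_image_url(fw_url, version, firmware_file):
        # Firmware images: {fw-url}download/{version}/{firmware_file.zip}
        return fw_url + "download/" + version + "/" + firmware_file

    # Example with made-up values:
    base = "https://example.com/RNode_Firmware/releases/"
    print(release_info_url(base))
    print(firmware_image_url(base, "1.55", "rnode_firmware_t3s3.zip"))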
Mark Qvist ace0d997d4 Updated changelog 2024-03-02 00:40:44 +01:00
Mark Qvist 798c252284 Updated manual 2024-03-02 00:40:35 +01:00
Mark Qvist 7da22c8580 Updated documentation build 2024-03-01 00:47:12 +01:00
Mark Qvist eefbb89cde Updated version 2024-03-01 00:05:40 +01:00
Mark Qvist 18f50ff1ae Limit the number of random blobs kept in memory and persisted to disk. Add check for non-existent announce in processing table. 2024-03-01 00:03:56 +01:00
Mark Qvist 05e97ac0db Fixed saving known destination when on-disk storage file has become corrupted 2024-02-29 23:23:41 +01:00
Mark Qvist c2c3a144d2 Added payload data inactivity metric to Link API 2024-02-29 23:05:16 +01:00
markqvist ea369015ee
Update issue templates 2024-02-29 17:07:53 +01:00
markqvist 9745842862
Update issue templates 2024-02-29 17:05:46 +01:00
markqvist 246289c52d
Create config.yml 2024-02-29 17:04:17 +01:00
markqvist ff71cb2f98
Update issue templates 2024-02-29 16:58:57 +01:00
Mark Qvist 5ca1ef1777 Revert EEPROM check logic 2024-02-29 16:18:39 +01:00
Mark Qvist 2b764b4af8 Allow EEPROM checksum mismatch on autoinstall. Fixes #432. 2024-02-29 15:50:45 +01:00
Mark Qvist a62843cd75 Updated readme 2024-02-16 17:54:31 +01:00
Mark Qvist 633435390d Added ability to flash T3 boards with TCXO 2024-02-16 17:32:01 +01:00
Mark Qvist 1e207ef972 Updated readme 2024-02-16 17:31:42 +01:00
Mark Qvist 35e9a0b38a Updated changelog 2024-02-14 16:58:51 +01:00
Mark Qvist 3d7f3825fb Updated manual 2024-02-14 16:54:29 +01:00
Mark Qvist 04b67a545d Updated version 2024-02-13 19:01:07 +01:00
Mark Qvist 61c2fbd0da Merge branch 'master' of github.com:markqvist/Reticulum 2024-02-13 19:00:00 +01:00
Mark Qvist 1aba4ec43a Added support for SX126x-based RNodes 2024-02-13 18:59:23 +01:00
markqvist 841a3daa26
Merge pull request #439 from jacobeva/master
Update min and max values to support SX1280
2024-02-09 22:30:32 +01:00
jacob.eva d98f03f245
Update min and max values to support SX1280 2024-02-09 21:17:58 +00:00
Mark Qvist 878e67f69d Fixed invalid RSSI offset reference. Fixes #433. 2024-01-18 23:01:54 +01:00
Mark Qvist e582a6d6d1 Updated changelog 2024-01-17 22:59:02 +01:00
Mark Qvist a948afb816 Updated manual 2024-01-17 22:56:24 +01:00
Mark Qvist 86a294388f Merge branch 'master' of github.com:markqvist/Reticulum 2024-01-17 22:52:48 +01:00
Mark Qvist 429a0b1bd3 Updated changelog 2024-01-17 22:52:01 +01:00
Mark Qvist ee8bb42633 Updated manual 2024-01-17 22:51:16 +01:00
Mark Qvist c659388a2c Updated manual 2024-01-17 22:36:17 +01:00
markqvist eaa8199988
Merge pull request #428 from jacobeva/master
Add NRF52 support
2024-01-17 01:33:07 +01:00
jacob.eva 4f890e7e8a
Added NRF52 support 2024-01-16 21:30:31 +00:00
Mark Qvist a37e039424 Check input_file attribute 2024-01-14 18:57:23 +01:00
Mark Qvist 8e1e2a9c54 Added debug function 2024-01-14 18:56:20 +01:00
Mark Qvist e4f94c9d0b Updated docs 2024-01-14 18:55:44 +01:00
Mark Qvist b007530123 Adjusted resource timeout calculation 2024-01-14 01:06:43 +01:00
Mark Qvist 4066bba303 Merge branch 'master' of github.com:markqvist/Reticulum 2024-01-14 00:48:14 +01:00
Mark Qvist 8951517d01 Updated version 2024-01-14 00:47:45 +01:00
Mark Qvist ae1d962b9b Fixed large resource transfers failing under some conditions 2024-01-14 00:46:55 +01:00
Mark Qvist a2caa47334 Improved link tests 2024-01-14 00:12:30 +01:00
Mark Qvist 9f43da9105 Fixed rnprobe formatting issue 2024-01-13 16:37:48 +01:00
Mark Qvist 038c696db9 Fixed missing check on malformed advertisement packets 2024-01-13 16:36:11 +01:00
Mark Qvist 8fa6ec144c Updated readme 2024-01-03 12:05:30 +01:00
Mark Qvist a8ccff7c55 Updated contribution guidelines 2024-01-03 12:00:10 +01:00
markqvist a5783da407
Merge pull request #416 from jooray/patch-2
Fix typo
2023-12-31 12:24:48 +01:00
Juraj Bednar bec3cee425
Fix typo 2023-12-30 23:47:51 +01:00
Mark Qvist b15bd19de5 Added funding info 2023-12-30 22:00:46 +01:00
Mark Qvist 38390fd021 Updated license 2023-12-30 21:57:40 +01:00
Mark Qvist 40e0eee64f Updated license 2023-12-30 21:55:20 +01:00
Mark Qvist af4cbb1baf Added funding info 2023-12-30 21:53:50 +01:00
Mark Qvist d3f4192fe3 Added funding info 2023-12-30 21:52:41 +01:00
Mark Qvist 47ef62ac11 Updated contribution guidelines 2023-12-30 21:43:35 +01:00
Mark Qvist d15ddc7a49 Updated contribution guidelines 2023-12-30 17:34:51 +01:00
Mark Qvist d67c8eb1cd Fixed potential division by zero 2023-12-25 11:39:24 +01:00
Mark Qvist f4de5d5199 Updated changelog 2023-12-07 15:52:20 +01:00
Mark Qvist 34e42988ea Updated docs 2023-12-07 15:51:22 +01:00
Mark Qvist 81d5d41149 Updated changelog 2023-12-07 15:51:15 +01:00
Mark Qvist 6b3f3a37f0 Updated version 2023-12-06 00:07:06 +01:00
Mark Qvist 60a604f635 Carrier change flag on listener replace 2023-12-06 00:06:45 +01:00
Mark Qvist 55a2daf379 Updated docs 2023-12-02 02:14:49 +01:00
Mark Qvist 2dbde13321 Added identity import and export in hex, base32 and base64 formats 2023-12-02 02:10:22 +01:00
Mark Qvist 6620dcde6b Updated docs 2023-11-14 10:06:28 +01:00
Mark Qvist 60966d5bb1 Updated changelog 2023-11-14 10:06:19 +01:00
Mark Qvist ea22a53bf2 Updated docs 2023-11-13 23:38:46 +01:00
Mark Qvist 7b9526b4ed Updated version 2023-11-13 23:23:40 +01:00
Mark Qvist 676074187a Added timeout and wait options to rnprobe and improved output formatting 2023-11-13 23:22:58 +01:00
Mark Qvist 5dd2c31caf Generate receipts prior to raw transmit 2023-11-13 23:12:59 +01:00
Mark Qvist 2db400a1a0 Updated changelog 2023-11-13 23:11:29 +01:00
Mark Qvist b68dbaf15e Updated log levels 2023-11-08 15:23:29 +01:00
Mark Qvist 84febcdf95 Updated changelog 2023-11-06 11:28:22 +01:00
Mark Qvist c972ef90c8 Updated manual 2023-11-06 11:21:32 +01:00
Mark Qvist 19a74e3130 Updated changelog 2023-11-06 11:21:23 +01:00
Mark Qvist 5ba789f782 Updated single-packet timing 2023-11-06 11:10:38 +01:00
Mark Qvist 58b5501e17 Cleanup 2023-11-06 11:08:31 +01:00
Mark Qvist b584832b8f Fixed logging error messages when a local client connects while instance is starting up 2023-11-06 11:06:14 +01:00
Mark Qvist fc0cf17c4d Updated docs 2023-11-05 23:37:45 +01:00
Mark Qvist 001dd369ec Updated version 2023-11-05 23:37:38 +01:00
Mark Qvist 9ce2ea4a5c Updated link test 2023-11-05 23:36:19 +01:00
Mark Qvist eec8814c22 Updated version 2023-11-05 23:29:06 +01:00
Mark Qvist 7a6ed68482 Set socket options 2023-11-05 22:57:03 +01:00
Mark Qvist cd9e23f2de Updated manual 2023-11-04 23:19:08 +01:00
Mark Qvist ffa84de0bc Updated changelog 2023-11-04 23:18:59 +01:00
Mark Qvist 89d3cdba17 Updated docs 2023-11-04 18:13:26 +01:00
Mark Qvist 2ba5843f22 Updated version 2023-11-04 18:05:42 +01:00
Mark Qvist c4d0f08767 Improved resource transfers over unreliable links 2023-11-04 18:05:20 +01:00
Mark Qvist db1cdec2a2 Fixed premature request timeout 2023-11-04 17:59:27 +01:00
Mark Qvist 1eea1a6a22 Updated example 2023-11-04 17:56:20 +01:00
Mark Qvist 4a69ce5a98 Updated changelog 2023-11-02 21:44:48 +01:00
Mark Qvist 8d653cba9b Updated docs 2023-11-02 21:39:57 +01:00
Mark Qvist a6126a6bc5 Updated version 2023-11-02 21:37:16 +01:00
Mark Qvist 957c2b3bc1 Fixed invalid reference 2023-11-02 21:33:21 +01:00
Mark Qvist 494bde4e79 Updated docs 2023-11-02 18:53:22 +01:00
Mark Qvist 5e39136dff Fixed missing path state resetting on stale path rediscovery 2023-11-02 16:15:42 +01:00
Mark Qvist 4b26a86a73 Added probe count option to rnprobe 2023-11-02 16:14:38 +01:00
Mark Qvist 43a6e280c0 Fixed bluetooth read timeouts on Android in environments with high 2.4 GHz noise 2023-11-02 16:08:49 +01:00
Mark Qvist 237a45b2ca Don't send rediscovery requests on local originator 2023-11-02 13:33:12 +01:00
Mark Qvist b161650ced Adjusted link timings 2023-11-02 13:04:09 +01:00
Mark Qvist 24975eac31 Updated version 2023-11-02 13:03:53 +01:00
Mark Qvist 5d1ff36565 Updated docs 2023-11-02 13:03:39 +01:00
Mark Qvist 628777900e Fixed attribute 2023-11-02 12:44:57 +01:00
Mark Qvist 12e87425dc Adjusted timings 2023-11-02 12:24:42 +01:00
Mark Qvist 873f049e20 Fixed redundant rediscovery path request 2023-11-02 04:35:57 +01:00
Mark Qvist 2ea963ed03 Fixed missing timeout calculation 2023-11-02 04:35:10 +01:00
Mark Qvist 1d1276d6dd Updated changelog 2023-10-31 12:24:59 +01:00
Mark Qvist 83741724b0 Updated documentation 2023-10-31 12:24:18 +01:00
Mark Qvist a4143cfe6d Improved link error handling. Fixes #387. 2023-10-31 11:44:12 +01:00
Mark Qvist 3d645ae2f4 Updated documentation 2023-10-31 11:09:54 +01:00
Mark Qvist 5ba125c801 Updated documentation 2023-10-31 10:53:43 +01:00
Mark Qvist badb392898 Updated manual 2023-10-28 00:40:07 +02:00
Mark Qvist c0e1ce8d86 Updated documentation and manual 2023-10-28 00:28:41 +02:00
markqvist 0bc248c5e4
Merge pull request #385 from jschulthess/master
Add user systemd service to manual
2023-10-28 00:23:10 +02:00
Mark Qvist 798dfb1727 Added ability to query physical layer stats on links 2023-10-28 00:05:35 +02:00
Mark Qvist a451b987aa Updated documentation 2023-10-28 00:03:53 +02:00
Mark Qvist f01074e5b8 Implemented link establishment on ultra low bandwidth links 2023-10-27 18:16:52 +02:00
Mark Qvist 0e12442a28 Local interface bitrate simulation 2023-10-27 18:12:53 +02:00
Jürg Schulthess a4e8489a34 fix code text syntax 2023-10-25 14:09:24 +02:00
Jürg Schulthess 276b6fbd22 fix indentation 2023-10-25 14:07:34 +02:00
Jürg Schulthess 52ab08c289 add user systemd service 2023-10-25 13:31:37 +02:00
Mark Qvist 38236366cf Improved pretty print output 2023-10-24 13:24:40 +02:00
Mark Qvist af3cc3c5dd Updated version 2023-10-24 01:45:07 +02:00
Mark Qvist 35ed1f950c Updated version 2023-10-24 01:43:50 +02:00
Mark Qvist c050ef945e Updated pretty-print functions 2023-10-24 01:41:49 +02:00
Mark Qvist bed71fa3f8 Added physical layer link stats to link and packet classes 2023-10-24 01:41:12 +02:00
Mark Qvist cf125daf5c Added link quality calculation to RNode interface 2023-10-24 01:40:17 +02:00
Mark Qvist 9f425c2e8d Updated exceptions 2023-10-24 01:39:25 +02:00
Mark Qvist 0dc78241ac Updated version 2023-10-19 01:39:47 +02:00
Mark Qvist 01e963e891 Updated manual 2023-10-19 01:39:39 +02:00
Mark Qvist b3731524ac Improved path re-discovery in changing topographies 2023-10-19 00:38:41 +02:00
Mark Qvist 67c7395ea7 Improved shared interface reconnection on service restart 2023-10-18 23:18:59 +02:00
Mark Qvist fddf36a920 Updated manual 2023-10-16 19:33:13 +02:00
Mark Qvist 4f561a8c0c Added exception handling to interface detach 2023-10-16 18:54:36 +02:00
Mark Qvist 778d6105c1 Updated readme 2023-10-10 00:32:15 +02:00
Mark Qvist 60c94dc9b6 Updated readme 2023-10-10 00:29:40 +02:00
Mark Qvist f71395e449 Updated readme 2023-10-10 00:26:28 +02:00
Mark Qvist 1abacca9bf Fixed missing command definition 2023-10-08 18:02:38 +02:00
Mark Qvist 40281d5403 Updated changelog 2023-10-07 16:42:10 +02:00
Mark Qvist e0da489156 Updated manual 2023-10-07 16:33:54 +02:00
Mark Qvist 2dcf1350e7 Updated changelog 2023-10-07 16:33:45 +02:00
Mark Qvist 1e280611ce Updated documentation and manuals 2023-10-07 13:02:42 +02:00
Mark Qvist f1d107846f Updated version 2023-10-07 13:00:16 +02:00
Mark Qvist cc951dcb53 Added RPC key configuration option to manual 2023-10-07 12:40:30 +02:00
Mark Qvist b5856a3706 Added configuration option to specify shared instance RPC key 2023-10-07 12:34:10 +02:00
Mark Qvist ed3479da9a Reordered airtime stats 2023-10-04 23:46:35 +02:00
Mark Qvist 5e15f421b7 Updated manual 2023-10-02 18:01:28 +02:00
Mark Qvist 0a9366ba6e Updated Android log level on bluetooth failure 2023-10-02 17:39:19 +02:00
Mark Qvist cf31435f39 Updated docs 2023-10-02 17:36:52 +02:00
Mark Qvist 9f58860842 Added missing super init on Android interfaces 2023-10-02 17:36:33 +02:00
Mark Qvist 875348383d Updated roadmap 2023-10-01 23:46:01 +02:00
Mark Qvist f79f190525 Changed ir utility name to rnir. Closes #377. 2023-10-01 23:39:43 +02:00
Mark Qvist 5e27a81412 Updated changelog 2023-10-01 12:41:45 +02:00
Mark Qvist 0dcb009579 Updated docs and manual 2023-10-01 12:34:50 +02:00
Mark Qvist 943f76804b Updated utility documentation 2023-10-01 12:34:29 +02:00
Mark Qvist 8bbe6ae3ae Updated docs and manual 2023-10-01 12:09:49 +02:00
Mark Qvist f0d85dd078 Merge branch 'master' of github.com:markqvist/Reticulum 2023-10-01 11:46:57 +02:00
Mark Qvist f85dda1829 Fixed typos in examples 2023-10-01 11:46:30 +02:00
markqvist 91e064cdf1
Merge pull request #375 from connervieira/patch-1
Fixed some typos
2023-10-01 11:46:25 +02:00
Mark Qvist fb4e53f6e3 Configured announce ingress limit defaults 2023-10-01 11:39:24 +02:00
Mark Qvist 03340ed091 Added ability to drop all paths via a specific transport instance to rnpath 2023-10-01 11:39:07 +02:00
Mark Qvist ed424fa0a2 Updated documentation 2023-10-01 09:51:27 +02:00
Mark Qvist 406ab216d1 Updated documentation 2023-10-01 09:24:25 +02:00
Mark Qvist 00d8a2064d Fixed typos 2023-10-01 09:24:17 +02:00
Mark Qvist 38b920e393 Updated docs and manual 2023-10-01 01:59:22 +02:00
Mark Qvist 1ed000c4d9 Updated manual 2023-10-01 01:35:17 +02:00
Mark Qvist d360958d10 Updated documentation 2023-10-01 01:35:00 +02:00
Mark Qvist fcdb455d73 Added sort mode to rnstatus 2023-10-01 01:08:19 +02:00
Mark Qvist 575639b721 Updated documentation 2023-10-01 01:08:08 +02:00
Mark Qvist 492573f9fe Added ingress control interface configuration options 2023-10-01 00:43:26 +02:00
Mark Qvist c5d30f8ee6 Cleanup 2023-10-01 00:24:03 +02:00
Mark Qvist 3c4791a622 Implemented announce ingress control 2023-10-01 00:16:32 +02:00
Mark Qvist 803a5736c9 Added held announce stats to rnstatus 2023-10-01 00:12:49 +02:00
Mark Qvist 267ffbdf5f Updated version 2023-09-30 22:37:43 +02:00
Mark Qvist 52028aa44c Added ingress control config option 2023-09-30 21:07:22 +02:00
Mark Qvist c5248d53d6 Fixed frequency pretty print function 2023-09-30 19:22:39 +02:00
Mark Qvist 2d2f0947ac Fixed frequency pretty print function 2023-09-30 19:18:30 +02:00
Mark Qvist 4fa616a326 Added interface sorting and announce rate display to rnstatus 2023-09-30 19:14:39 +02:00
Mark Qvist 136713eec1 Added announce frequency stats 2023-09-30 19:13:58 +02:00
Mark Qvist 0fd75cb819 Added announce frequency sampling to interfaces 2023-09-30 19:11:10 +02:00
Mark Qvist ea52153969 Added convenience function for printing frequencies 2023-09-30 19:09:26 +02:00
Mark Qvist 3854781028 Updated manual 2023-09-30 19:08:57 +02:00
Conner Vieira ec2805f357
Fixed some typos 2023-09-29 20:54:48 -04:00
Mark Qvist b5cb3a65dd Fixed announce queue not clearing all announces with exceeded retry limit at the same time 2023-09-30 00:25:47 +02:00
Mark Qvist c79cb3aa20 Resolver skeleton 2023-09-29 23:18:30 +02:00
Mark Qvist 8bff119691 Added Identity Resolver skeleton 2023-09-29 12:44:03 +02:00
Mark Qvist 5e0b2c5b42 Allow rnid aspect lengths of 1 2023-09-29 12:29:37 +02:00
Mark Qvist 8908022b88 Updated license headers 2023-09-29 10:31:20 +02:00
Mark Qvist b0dda0ed86 Added Resolver class 2023-09-29 10:31:00 +02:00
Mark Qvist 6ae72d4225 Updated exit codes 2023-09-29 10:30:19 +02:00
Mark Qvist 0a188a2d39 Fixed output formatting in rncp 2023-09-25 15:29:41 +02:00
Mark Qvist 036abb28fe Added timeout option to rnprobe 2023-09-25 15:27:24 +02:00
Mark Qvist a732767a28 Fixed local RSSI and SNR cache pop order 2023-09-25 14:17:58 +02:00
Mark Qvist 32a1261d98 Updated manual 2023-09-22 12:01:17 +02:00
Mark Qvist 27c5af3bbc Updated manual 2023-09-22 10:07:10 +02:00
Mark Qvist 5872108da3 Added timeout to rnprobe 2023-09-22 10:04:37 +02:00
Mark Qvist 8f6c6b76de Updated changelog 2023-09-21 21:24:26 +02:00
Mark Qvist 99db625c62 Updated manual 2023-09-21 21:23:28 +02:00
Mark Qvist fdf6a31cbd Updated changelog 2023-09-21 21:23:19 +02:00
Mark Qvist 75f353d7e2 Updated documentation 2023-09-21 19:12:34 +02:00
Mark Qvist 82f204fb44 Added ability to enable a built-in probe responder destination for Transport Instances 2023-09-21 18:48:08 +02:00
Mark Qvist 8d4492ecfd Updated documentation 2023-09-21 18:47:40 +02:00
Mark Qvist f8a53458d6 Added respond_to_probes option to example config 2023-09-21 18:33:14 +02:00
Mark Qvist 4229837170 Updated documentation 2023-09-21 18:32:46 +02:00
Mark Qvist 4be2ae6c70 Fixed verbose output bug in rnprobe 2023-09-21 18:32:36 +02:00
Mark Qvist dbdeba2fe0 Updated rnprobe utility 2023-09-21 17:49:14 +02:00
Mark Qvist 7e34b61f37 Added link status check on identify 2023-09-21 14:12:32 +02:00
Mark Qvist bf726ed2c7 Fixed missing timeout check in rncp 2023-09-21 14:12:14 +02:00
Mark Qvist fa54a2affe Updated documentation 2023-09-21 13:51:03 +02:00
Mark Qvist 62e1d0e554 Updated version 2023-09-21 13:46:51 +02:00
Mark Qvist 9c823a038b Improved path re-discovery on Transport Instances when local nodes roam to other network segments 2023-09-21 13:46:28 +02:00
Mark Qvist 1e6cd50f46 Updated rnstatus output 2023-09-21 12:07:11 +02:00
Mark Qvist 06716e4873 Disabled caching until redesign 2023-09-21 12:05:37 +02:00
Mark Qvist 8e4a1e3ffa Increased AutoInterface peering timeout on Android 2023-09-20 00:53:51 +02:00
Mark Qvist 0abb3bd4c3 Update changelog 2023-09-19 18:46:28 +02:00
Mark Qvist 336574daed Updated manual 2023-09-19 18:46:23 +02:00
Mark Qvist 07938ba111 Added ability to set custom RNode display address to rnodeconf 2023-09-19 18:33:37 +02:00
Mark Qvist e699eb6d25 Updated changelog 2023-09-19 11:27:06 +02:00
Mark Qvist 3864549752 Updated changelog 2023-09-19 11:22:58 +02:00
Mark Qvist 0b934cd0f6 Updated manual 2023-09-19 11:13:30 +02:00
Mark Qvist 5bac38a752 Updated rncp output 2023-09-19 10:14:02 +02:00
Mark Qvist 72c8d4d3dd Updated docs 2023-09-19 10:13:45 +02:00
Mark Qvist b8c6ea015e Fixed missing attribute check 2023-09-19 10:13:27 +02:00
Mark Qvist ffe1beb7ae Updated log statement 2023-09-19 10:13:04 +02:00
Mark Qvist 21c6dbfce0 Added check for destination direction on announce 2023-09-19 10:11:45 +02:00
Mark Qvist 70cbb8dc79 Updated utilities section of docs 2023-09-18 23:16:57 +02:00
Mark Qvist 334f2a364d Added fetch mode to rncp 2023-09-18 22:40:29 +02:00
Mark Qvist b477354235 Added fetch mode to rncp 2023-09-18 22:22:44 +02:00
Mark Qvist 254c966159 Fixed potential None reference 2023-09-18 20:52:36 +02:00
Mark Qvist 7ee9b07d9c Added silent mode to rncp 2023-09-18 16:36:58 +02:00
Mark Qvist 839b72469c Added allowed_identities file support to rncp 2023-09-18 16:12:45 +02:00
Mark Qvist 874d76b343 Added Transport Instance uptime to rnstatus output 2023-09-18 15:45:55 +02:00
Mark Qvist 7497e7aa0c Updated readme 2023-09-18 13:04:38 +02:00
Mark Qvist efa084fb0f Updated readme 2023-09-18 13:04:24 +02:00
Mark Qvist 48e4a27054 Updated manual 2023-09-18 13:02:41 +02:00
Mark Qvist 96cf6a790e Updated documentation 2023-09-18 13:02:18 +02:00
Mark Qvist d7b54ff397 Updated readme 2023-09-18 13:02:08 +02:00
Mark Qvist 90ab065073 Updated manual 2023-09-18 12:36:08 +02:00
Mark Qvist b6f0784311 Added rnid utility to manual. Updated communications hardware section. 2023-09-18 12:35:54 +02:00
Mark Qvist e37ec654ee Fixed rnid output bug 2023-09-18 12:07:30 +02:00
Mark Qvist b237d51276 Cleanup 2023-09-18 11:00:36 +02:00
Mark Qvist 155ea24008 Added channel CSMA parameter stats to RNode Interface 2023-09-18 00:45:38 +02:00
Mark Qvist 8c8affc800 Improved Channel sequencing, retries and transfer efficiency 2023-09-18 00:42:54 +02:00
Mark Qvist 481062fca1 Added adaptive compression to Buffer class 2023-09-18 00:39:27 +02:00
Mark Qvist ffcc5560dc Updated version 2023-09-18 00:34:15 +02:00
Mark Qvist 09e146ef0b Updated channel tests 2023-09-18 00:34:02 +02:00
Mark Qvist 4c6b04ff69 Fixed invalid path for firmware hash generation while using extracted firmware to autoinstall 2023-09-15 13:49:15 +02:00
Mark Qvist 9889b479d1 Fixed inadvertent AutoInterface multi-IF deque hit for resource transfer retries 2023-09-14 22:14:31 +02:00
Mark Qvist 95dec00c76 Updated roadmap 2023-09-14 00:34:45 +02:00
Mark Qvist cff268926d Updated changelog 2023-09-14 00:22:02 +02:00
Mark Qvist 6fa88f4e4a Updated manual 2023-09-14 00:21:23 +02:00
Mark Qvist ab8e6791fe Updated changelog 2023-09-14 00:21:08 +02:00
Mark Qvist 13c45cc59a Added channel stat reporting and airtime controls to RNode interface 2023-09-13 21:15:32 +02:00
Mark Qvist 67c468884f Added channel load and airtime stats to rnstatus output 2023-09-13 20:07:53 +02:00
Mark Qvist f028d44609 Added airtime config info to docs 2023-09-13 20:07:31 +02:00
Mark Qvist 18b952e612 Added airtime config options, improved periodic data persist 2023-09-13 20:07:07 +02:00
Mark Qvist 25178d8f50 Updated docs 2023-09-13 13:37:37 +02:00
Mark Qvist 1c0b7c00fd Updated version 2023-09-13 13:24:50 +02:00
Mark Qvist 2439761529 Prevent answering path requests on roaming-mode interfaces for next-hop instances on same roaming-mode interface 2023-09-13 13:03:22 +02:00
Mark Qvist 8803dd5b65 Catch error when undefined next-hop path data is returned 2023-09-13 13:02:05 +02:00
Mark Qvist d15d04eae5 Updated debug logging 2023-09-13 13:01:14 +02:00
Mark Qvist bf40f74a4a Updated documentation build 2023-09-05 12:08:59 +02:00
Mark Qvist c0339c0f46 Updated testnet info 2023-08-30 02:15:34 +02:00
Mark Qvist b64bb166c0 Updated testnet info 2023-08-30 01:50:12 +02:00
Mark Qvist 31d30030dc Updated readme 2023-08-29 18:50:05 +02:00
Mark Qvist 556e111a98 Updated manual 2023-08-15 17:09:20 +02:00
Mark Qvist 70b0dd621b Updated install section 2023-08-15 11:27:22 +02:00
Mark Qvist f7d3212651 Updated install section 2023-08-15 11:00:59 +02:00
Mark Qvist 0a29f0cfa1 Updated changelog 2023-08-15 10:38:29 +02:00
Mark Qvist 97153ad59d Updated explanation text 2023-08-15 10:30:49 +02:00
Mark Qvist bc8378fb60 Merge branch 'master' of github.com:markqvist/Reticulum 2023-08-15 10:27:15 +02:00
markqvist 3320cf8da8
Merge pull request #363 from blackjack75/master
Added suggestion to use lower baudrate if flashing fails on ESP32
2023-08-15 10:26:57 +02:00
markqvist bb53bd3f27
Merge pull request #362 from Erethon/eeprom-dump-dir
rnodeconf: Dump eeprom under specific directory
2023-08-15 10:25:17 +02:00
Mark Qvist 73eed59fab Updated docs 2023-08-15 10:23:51 +02:00
Santiago Lema 91ede52634 Added suggestion to use lower baudrate if flashing fails on ESP32 2023-08-14 20:47:40 +02:00
Dionysis Grigoropoulos 93f13a98b2
rnodeconf: Dump eeprom under specific directory 2023-08-14 20:08:40 +03:00
Mark Qvist c87c5c9709 Updated docs 2023-08-14 16:46:00 +02:00
markqvist b0c6c53430
Merge pull request #360 from Erethon/set-baud-rate-when-flashing
rnodeconf: Add option to set baud when flashing
2023-08-14 16:42:26 +02:00
Mark Qvist 94a5222390 Updated version 2023-08-13 20:38:41 +02:00
Dionysis Grigoropoulos 98bb304060
rnodeconf: Add option to set baud when flashing 2023-08-12 02:37:05 +03:00
Mark Qvist 08bfd923ea Fixed possible invalid comparison in link watchdog job 2023-08-05 15:10:00 +02:00
Mark Qvist ae28f04ce4 Added bytes input to destination hash convenience functions 2023-07-10 00:54:02 +02:00
Mark Qvist 024a742f2a Updated changelog 2023-07-09 16:51:54 +02:00
Mark Qvist df184f3e54 Updated docs 2023-07-09 16:48:45 +02:00
Mark Qvist 5542410afa Updated version 2023-07-09 16:45:52 +02:00
Mark Qvist 99205cdc0f Fixed typo in rnid 2023-07-09 16:29:40 +02:00
Mark Qvist 8c936af963 Merge branch 'master' of github.com:markqvist/Reticulum 2023-06-29 22:12:30 +02:00
Mark Qvist 7fe751e74f Updated documentation 2023-06-29 16:52:06 +02:00
markqvist 6d551578c3
Merge pull request #325 from npetrangelo/patch-3
Update __init__.py
2023-06-22 20:05:37 +02:00
markqvist 40c85fb607
Merge pull request #330 from Erethon/rnodeconf-device-selection
Fix bug in device selection of rnodeconf
2023-06-22 20:00:42 +02:00
Dionysis Grigoropoulos 743736b376
Fix bug in device selection of rnodeconf 2023-06-21 00:02:11 +03:00
Mark Qvist 7fdb431d70 Updated changelog 2023-06-13 19:27:53 +02:00
Mark Qvist ebcc3d8912 Updated manual 2023-06-13 19:27:07 +02:00
Mark Qvist 32e29a54c3 Updated manual 2023-06-13 19:21:03 +02:00
Mark Qvist 049733c4b6 Fixed race condition for link initiators on timed out link establishment 2023-06-13 19:20:54 +02:00
Mark Qvist 420d58527d Merge branch 'master' of github.com:markqvist/Reticulum 2023-06-13 16:11:28 +02:00
Mark Qvist bab779a34c Fixed race condition for link initiators on timed out link establishment 2023-06-13 16:10:47 +02:00
markqvist 45aa71b2b7
Merge pull request #326 from SebastianObi/master
RNodeInterface - Fixed missing init of 'r_stat_snr'.
2023-06-07 18:40:50 +02:00
SebastianObi 6dcfe2cad6
Fixed missing init of 'r_stat_snr'.
This will otherwise lead to the error:
AttributeError: 'RNodeInterface' object has no attribute 'r_stat_snr'
2023-06-07 17:43:14 +02:00
SebastianObi f206047908
Fixed missing init of 'r_stat_snr'.
This will otherwise lead to the error:
AttributeError: 'RNodeInterface' object has no attribute 'r_stat_snr'
2023-06-07 17:42:44 +02:00
Nathan Petrangelo 6ce979a7de
Update __init__.py
Auto convert log messages to strings on the way in
2023-06-05 17:31:52 -04:00
Mark Qvist 97f97eb063 Updated changelog 2023-06-03 16:04:18 +02:00
Mark Qvist f3db762e9f Updated documentation 2023-06-03 16:03:13 +02:00
Mark Qvist f9f623dfa5 Updated version and changelog 2023-06-03 15:52:44 +02:00
Mark Qvist ffa6bec3b4 Updated parser 2023-06-02 21:24:57 +02:00
Mark Qvist 4f78973751 Fixed race condition when timed-out link receives a late establishment proof a few milliseconds after it has timed out 2023-06-02 21:24:49 +02:00
Mark Qvist a8a7af4b74 Handle missing identity file in rncp. Fixes #317. 2023-05-31 15:39:55 +02:00
Mark Qvist 45295c779c Updated changelog 2023-05-19 11:38:46 +02:00
Mark Qvist a82376d1f5 Updated manuals 2023-05-19 11:35:45 +02:00
Mark Qvist 75c6248264 Updated documentation 2023-05-19 11:31:43 +02:00
Mark Qvist 9294ab4f97 Updated version 2023-05-19 11:31:36 +02:00
Mark Qvist f01193e854 Updated documentation 2023-05-19 03:06:24 +02:00
Mark Qvist d7375bc4c3 Fixed callback invocation on channel receive 2023-05-19 01:58:28 +02:00
Mark Qvist 1a860c6ffd Add EOF signal on buffer close 2023-05-19 01:57:20 +02:00
Mark Qvist 800ed3af7a Fixed ready callback invocation 2023-05-18 23:35:28 +02:00
Mark Qvist 9c8e79546c Fixed missing check in receipt culling 2023-05-18 23:33:26 +02:00
Mark Qvist 4c272aa536 Updated buffer tests for windowed channel 2023-05-18 23:32:29 +02:00
Mark Qvist e184861822 Enabled channel tests 2023-05-18 23:31:29 +02:00
Mark Qvist d40e19f08d Updated gitignore 2023-05-18 23:29:31 +02:00
Mark Qvist 817ee0721a Updated manual 2023-05-12 12:38:12 +02:00
Mark Qvist 22ec4afdab Updated changelog 2023-05-12 12:38:02 +02:00
Mark Qvist 61626897e7 Add channel window mode for slow links 2023-05-11 21:28:13 +02:00
Mark Qvist 6fd3edbb8f Updated docs 2023-05-11 20:55:28 +02:00
Mark Qvist fc5b02ed5d Added medium window to channel 2023-05-11 20:23:36 +02:00
Mark Qvist a06e752b76 Added multi-interface duplicate deque to AutoInterface 2023-05-11 19:54:26 +02:00
Mark Qvist 3a947bf81b Updated documentation 2023-05-11 19:53:40 +02:00
Mark Qvist 31121ca885 Updated documentation 2023-05-11 18:49:01 +02:00
Mark Qvist 387b8c46ff Cleanup 2023-05-11 18:35:01 +02:00
Mark Qvist 66fda34b20 Cleanup 2023-05-11 17:48:07 +02:00
Mark Qvist 1542c5f4fe Fixed received link packet proofs not resetting watchdog stale timer 2023-05-11 16:22:44 +02:00
Mark Qvist 523fc7b8f9 Adjusted loglevel 2023-05-11 16:09:25 +02:00
Mark Qvist 73faf04ea1 Tuned channel windowing 2023-05-10 20:01:33 +02:00
Mark Qvist e10ddf9d2d Cleanup 2023-05-10 19:28:28 +02:00
Mark Qvist 641a7ea75d Implemented basic channel windowing 2023-05-10 19:15:45 +02:00
Mark Qvist e543d5c27f Implemented basic channel windowing 2023-05-10 19:15:20 +02:00
Mark Qvist 01c59ab0c6 Cleanup 2023-05-10 18:44:05 +02:00
Mark Qvist a4c64abed4 Initial framework for channel windowing 2023-05-10 18:43:17 +02:00
Mark Qvist 7df11a6f67 Fixed missing isolation of packet delivery callback 2023-05-10 18:40:46 +02:00
Mark Qvist 1bd6020163 Cleanup 2023-05-10 18:40:18 +02:00
Mark Qvist b3ac3131b5 Updated version 2023-05-09 23:07:47 +02:00
Mark Qvist f522cb1db1 Added per-packet compression to buffer 2023-05-09 22:13:57 +02:00
Mark Qvist d96a4853fe Fixed version display 2023-05-09 22:13:23 +02:00
Mark Qvist 52a0447fea Fixed resent packets not getting repacked 2023-05-09 22:12:49 +02:00
Mark Qvist e82e6d56f1 Added ability to trust external signing keys to rnodeconf 2023-05-09 15:31:02 +02:00
Mark Qvist 3967ef453d Updated documentation 2023-05-05 13:47:29 +02:00
Mark Qvist 76f7751d5f Updated documentation 2023-05-05 13:46:23 +02:00
Mark Qvist 8716ffc873 Updated documentation 2023-05-05 13:38:06 +02:00
Mark Qvist b476e4cfb0 Updated changelog 2023-05-05 11:48:00 +02:00
Mark Qvist 7ec77a10d3 Updated changelog 2023-05-05 11:46:06 +02:00
Mark Qvist 55a9c5ef71 Updated documentation 2023-05-05 11:27:52 +02:00
Mark Qvist 6d3ba31993 Updated readme 2023-05-05 11:14:50 +02:00
Mark Qvist d3f4a674aa Updated readme 2023-05-05 11:13:18 +02:00
Mark Qvist 599ab20ed0 Updated readme 2023-05-05 11:09:12 +02:00
Mark Qvist dcf33e125b Cleanup 2023-05-05 10:43:27 +02:00
Mark Qvist 01600b96a4 Fix import paths 2023-05-05 10:37:22 +02:00
Mark Qvist 64bdc4c18c Fix import paths 2023-05-05 10:25:15 +02:00
Mark Qvist 0889b8a7c5 Updated manual 2023-05-05 10:09:17 +02:00
Mark Qvist 1b2fee3ab8 Fixed EPUB output 2023-05-05 09:43:21 +02:00
Mark Qvist da7a4433c0 Updated documentation 2023-05-04 23:30:27 +02:00
Mark Qvist 5e5d89cc92 Removed dependency on netifaces. 2023-05-04 23:19:43 +02:00
Mark Qvist a3bee4baa9 Removed netifaces dependency from AutoInterface 2023-05-04 17:55:58 +02:00
Mark Qvist fab83ec399 Restructured library 2023-05-04 17:55:38 +02:00
Mark Qvist b740e36985 Added ifaddr module 2023-05-04 17:46:56 +02:00
Mark Qvist 29693c6fe2 Updated documentation 2023-05-04 12:42:12 +02:00
Mark Qvist 72638f40a6 Updated documentation 2023-05-04 12:23:25 +02:00
Mark Qvist 8d29e83d90 Updated dependencies 2023-05-04 12:23:16 +02:00
Mark Qvist 53b325d34d Added support for T3 v1.0 to rnodeconf 2023-05-03 15:56:19 +02:00
Mark Qvist d31cf6e297 Added ability to configure RNode display intensity 2023-05-03 14:26:47 +02:00
Mark Qvist e386a5d08b Use native Python unzip for updates 2023-05-03 12:57:38 +02:00
Mark Qvist d467ed9ece Merge branch 'master' of github.com:markqvist/Reticulum 2023-05-03 12:27:10 +02:00
Mark Qvist 892a467d74 Update version 2023-05-03 12:26:48 +02:00
markqvist 4366e71f34
Merge pull request #272 from VioletEternity/windows
Improve Windows compatibility for rnodeconf
2023-05-03 12:26:36 +02:00
Mark Qvist 7e9998b4fd Use included platform detection method 2023-05-03 12:21:57 +02:00
markqvist 79abe93139
Merge pull request #278 from VioletEternity/windows-so_reuseaddr
Use SO_EXCLUSIVEADDRUSE instead of SO_REUSEADDR on Windows
2023-05-03 12:18:49 +02:00
Mark Qvist d69d4b3920 Fixed firmware extraction for unverifiable devices. Fixes #266. 2023-05-02 18:10:04 +02:00
Mark Qvist 3300541181 Fixed invalid error code in conditional. Fixes #284. 2023-05-02 17:45:30 +02:00
Mark Qvist 3848059f19 Only use ifname for link-local discovery scopes. Fixes #283. 2023-05-02 17:39:06 +02:00
Mark Qvist 30021d89cb Fixed header bits in get_packed_flags(). Fixes #275. 2023-05-02 17:33:38 +02:00
Mark Qvist 29019724bd Added verbosity argument to Reticulum instantiation. Fixes #238. 2023-05-02 16:42:04 +02:00
Maya ba7838c04e Use SO_EXCLUSIVEADDRUSE instead of SO_REUSEADDR on Windows.
On Linux, SO_REUSEADDR is used so that a socket in TIME-WAIT state can
be rebound after a listening process is restarted. It does not allow two
processes to listen on the exact same (addr, port) combination. However,
on Windows, it does, and SO_EXCLUSIVEADDRUSE is required to reproduce
the Linux behavior.

Reticulum relies on bind() returning an error when the same (addr, port)
combination is already in use by another process, in order to detect whether
a shared instance is already running. Setting SO_EXCLUSIVEADDRUSE makes this
detection work on Windows as well.
2023-04-19 03:03:15 +01:00
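A minimal sketch of the platform-dependent socket option described above (illustrative only, not Reticulum's actual shared-instance code):

    import socket, sys

    def bind_shared_instance_socket(addr, port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        if sys.platform == "win32":
            # On Windows, SO_EXCLUSIVEADDRUSE makes bind() fail if another
            # process already listens on (addr, port), mirroring Linux behaviour.
            s.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)
        else:
            # On Linux, SO_REUSEADDR only allows rebinding a socket left in
            # TIME-WAIT, not two live listeners on the same (addr, port).
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((addr, port))  # raises OSError if a shared instance is already bound
        return s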
Maya af16c68e47 Make esptool.py invocation compatible with Windows. 2023-04-13 18:17:14 +01:00
Maya bda5717051 Use standard Python zipfile module to decompress firmware 2023-04-13 18:10:21 +01:00
Mark Qvist fac4973329 Fixed potential race condition in announce queue handling for AutoInterface 2023-03-09 18:32:14 +01:00
Mark Qvist 90cfaa4e82 Updated manual 2023-03-08 14:54:04 +01:00
Mark Qvist 443aa575df Updated changelog 2023-03-08 14:53:52 +01:00
Mark Qvist 619771c3a3 Updated changelog 2023-03-08 14:43:35 +01:00
Mark Qvist 18a56cfd52 Updated manual 2023-03-08 14:27:51 +01:00
Mark Qvist 55c39ff27c Updated roadmap 2023-03-08 14:10:56 +01:00
Mark Qvist 159c7a9a52 Fixed rnstatus JSON output error 2023-03-08 14:10:33 +01:00
Mark Qvist af8edc335b Updated roadmap 2023-03-08 12:35:41 +01:00
Mark Qvist 4d3ea37bc3 Updated roadmap and docs 2023-03-08 12:34:09 +01:00
Mark Qvist 226004da94 Ignore lo0 in all cases. Fixes #237. 2023-03-07 16:43:10 +01:00
Mark Qvist 47b358351f Exclude tests from wheel. Fixes #241. 2023-03-07 16:31:31 +01:00
markqvist f5d77a1dfb
Merge pull request #252 from acehoss/bugfix/buffer-missing-segments
Bugfix: buffer missing segments
2023-03-05 17:59:03 +01:00
Aaron Heise 9c9f0a20f9
Handle sequence overflow when checking incoming message 2023-03-04 23:54:07 -06:00
Aaron Heise 6d9d410a70
Address multiple issues with Buffer and Channel
- StreamDataMessage now packed by struct rather than umsgpack for a more predictable size
- Added protected variable on LocalInterface to allow tests to simulate a low bandwidth connection
- Retry timer now has exponential backoff and a more sane starting value
- Link proves packet _before_ sending contents to Channel; this should help prevent spurious retries especially on half-duplex links
- Prevent Transport packet filter from filtering out duplicate packets for Channel; handle duplicates in Channel to ensure the packet is reproven (in case the original proof packet was lost)
- Fix up other tests broken by these changes
2023-03-04 23:37:58 -06:00
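A toy sketch of the "pack with struct for a predictable size" idea from the first point above (the field layout is invented for illustration and is not StreamDataMessage's actual wire format):

    import struct

    HEADER = struct.Struct(">HH?")  # stream id, sequence, EOF flag: always 5 bytes

    def pack_stream_data(stream_id, sequence, eof, data):
        # Fixed-size header followed by the payload, so overhead is constant.
        return HEADER.pack(stream_id, sequence, eof) + data

    def unpack_stream_data(raw):
        stream_id, sequence, eof = HEADER.unpack(raw[:HEADER.size])
        return stream_id, sequence, eof, raw[HEADER.size:]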
Mark Qvist d8f3ad8d3f Temporarily disabled extra-level log statement 2023-03-04 19:30:47 +01:00
Mark Qvist a1b75b9746 Increased per-hop timeout 2023-03-04 19:30:23 +01:00
Mark Qvist 80f3bfaece Adjusted StreamDataMessage overhead calculation 2023-03-04 19:06:47 +01:00
Mark Qvist 37b2d8a6ec Fixed Link MDU output in phyparams() 2023-03-04 18:37:28 +01:00
Mark Qvist 777fea9cea Differentiate exception between link establishment callback, and internal RTT packet handling 2023-03-04 18:32:36 +01:00
Mark Qvist bbfdd37935 Added check for link state before sending 2023-03-04 18:31:07 +01:00
Mark Qvist 07484725a0 Updated documentation 2023-03-04 17:57:18 +01:00
Mark Qvist 709b126a67 Updated strings in Buffer example 2023-03-04 17:56:50 +01:00
Mark Qvist 28e6302b3d Updated versions 2023-03-04 17:56:30 +01:00
Mark Qvist 27861e96f8 Updated documentation 2023-03-03 22:16:13 +01:00
markqvist e36312a3cb
Merge pull request #250 from acehoss/feature/buffer
Buffer: send and receive binary data over Channel
2023-03-03 17:21:25 +01:00
Aaron Heise 5b5dbdaa91
Add example to documentation 2023-03-02 17:21:32 -06:00
Aaron Heise 99dc97365f
Merge remote-tracking branch 'origin/feature/buffer' into feature/buffer 2023-03-02 17:17:40 -06:00
Aaron Heise aac2b9f987
Buffer: send and receive binary data over Channel
(also some minor fixes in channel)
2023-03-02 17:17:18 -06:00
Aaron Heise 067c275c46
Buffer: send and receive binary data over Channel
(also some minor fixes in channel)
2023-03-02 17:13:55 -06:00
Mark Qvist 58004d7c05 Updated documentation 2023-03-02 12:47:55 +01:00
Mark Qvist aa0d9c5c13 Merge branch 'master' of github.com:markqvist/Reticulum 2023-03-02 12:05:06 +01:00
Mark Qvist 9e46950e28 Added output to echo example 2023-03-02 12:04:50 +01:00
markqvist a6551fc019
Merge pull request #246 from gdt/fix-transmit-hash
AutoInterface: Drop embedded scope identifier on fe80::
2023-03-02 11:34:00 +01:00
markqvist a06ae40797
Merge pull request #236 from faragher/master
Additional error messages for offline flashing.
2023-03-02 11:31:31 +01:00
markqvist 1db08438df
Merge pull request #248 from Erethon/hkdf-remove-dead-code
hkdf: Remove duplicate check if the salt is None
2023-03-02 11:29:18 +01:00
markqvist 89aa51ab61
Merge pull request #245 from acehoss/feature/channel
Channel: reliable delivery over Link
2023-03-02 11:27:15 +01:00
Dionysis Grigoropoulos ddb7a92c15 hkdf: Remove duplicate check if the salt is None
The second if isn't needed since we initialize the salt with zeroes
earlier. If instead we meant to pass an empty bytes class to the HMAC
implementation, the end result would be the same, since it would get
padded with zeroes in the HMAC code.
2023-03-01 16:22:51 +02:00
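An illustrative sketch of why the second check was redundant (a simplified HKDF-Extract following the RFC 5869 convention; not the actual RNS.Cryptography code):

    import hashlib, hmac

    def hkdf_extract(salt, ikm, hash_name="sha256"):
        hash_len = hashlib.new(hash_name).digest_size
        if salt is None or len(salt) == 0:
            salt = bytes(hash_len)  # salt defaults to a block of zeroes
        # A later "if salt is None" branch can never fire after this point, and
        # an empty salt would be zero-padded inside HMAC anyway.
        return hmac.new(salt, ikm, hash_name).digest()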
Greg Troxel e273900e87 AutoInterface: Drop embedded scope identifier on fe80::
The code previously dropped scope identifiers expressed as a trailing
"%ifname", which happens on macOS.  On NetBSD and OpenBSD (and likely
FreeBSD, not tested), the scope identifier is embedded.  Drop that
form of identifier as well, because we keep address and ifname
separate, and because the scope identifier must not be part of
computing the hash of the address.

Resolves #240, failure to peer on NetBSD and OpenBSD.
2023-02-28 10:19:46 -05:00
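A hedged sketch of normalizing both scope-identifier forms mentioned above (the exact AutoInterface handling may differ in detail):

    import ipaddress

    def strip_scope_id(addr):
        # addr is an IPv6 link-local address string.
        # macOS form: trailing "%ifname" zone suffix.
        if "%" in addr:
            addr = addr.split("%")[0]
        # KAME/BSD form: zone index embedded in the second 16-bit word of fe80::/10.
        packed = bytearray(ipaddress.ip_address(addr).packed)
        if packed[0] == 0xfe and (packed[1] & 0xc0) == 0x80:
            packed[2] = 0
            packed[3] = 0
        return str(ipaddress.ip_address(bytes(packed)))

    print(strip_scope_id("fe80:1::1%em0"))  # -> fe80::1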
Aaron Heise d2d121d49f
Fix broken Channel test 2023-02-28 08:38:36 -06:00
Aaron Heise 9963cf37b8
Fix exceptions on Channel shutdown 2023-02-28 08:38:23 -06:00
Aaron Heise 72300cc821
Revert "Only send proof if link is still active" 2023-02-28 08:24:13 -06:00
Aaron Heise 8168d9bb92
Only send proof if link is still active 2023-02-28 08:13:07 -06:00
Aaron Heise 8f0151fed6
Tidy up PR 2023-02-27 21:33:50 -06:00
Aaron Heise d3c4928eda
Tidy up PR 2023-02-27 21:31:41 -06:00
Aaron Heise 68f95cd80b
Tidy up PR 2023-02-27 21:30:13 -06:00
Aaron Heise 42935c8238
Make the PR have zero deletions 2023-02-27 21:15:25 -06:00
Aaron Heise 118acf77b8
Fix up documentation even more 2023-02-27 21:10:28 -06:00
Aaron Heise 661964277f
Fix up documentation for building 2023-02-27 19:05:25 -06:00
Aaron Heise 464dc23ff0
Add some internal documentation 2023-02-27 17:36:04 -06:00
Aaron Heise 44dc2d06c6
Add channel tests to all test suite
Also print name in each test
2023-02-26 11:47:46 -06:00
Aaron Heise c00b592ed9
System-reserved channel message types
- a message handler can return logical True to prevent subsequent message handlers from running
- Message types >= 0xff00 are reserved for system/framework messages
2023-02-26 11:39:49 -06:00
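A hypothetical sketch of the dispatch behaviour described above (not the actual RNS Channel API):

    SYSTEM_MESSAGE_TYPES_START = 0xff00  # types >= 0xff00 reserved for the framework

    def dispatch(handlers, message):
        for handler in handlers:
            # A handler returning a truthy value stops later handlers
            # from seeing this message.
            if handler(message):
                break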
Aaron Heise e005826151
Allow channel message handlers to short circuit
- a message handler can return logical True to prevent subsequent message handlers from running
2023-02-26 11:23:38 -06:00
Aaron Heise a61b15cf6a
Added channel example 2023-02-26 07:26:12 -06:00
Aaron Heise fe3a3e22f7
Expose Channel on Link
Separates channel interface from link

Also added: allow multiple message handlers
2023-02-26 07:25:49 -06:00
Aaron Heise 68cb4a6740
Initial work on Channel 2023-02-25 18:23:25 -06:00
Mark Qvist 9f06bed34c Updated readme and roadmap 2023-02-23 17:27:05 +01:00
Mark Qvist 3b1936ef48 Added EPUB output to documentation build 2023-02-23 17:25:38 +01:00
Michael Faragher 5b3d26a90a Additional error messages for offline flashing. 2023-02-22 12:49:24 -06:00
markqvist b381a61be8
Update Changelog.md 2023-02-18 23:35:41 +01:00
Mark Qvist 1e2fa2068c Updated manual 2023-02-18 16:53:18 +01:00
Mark Qvist c604214bb9 Improved RNode reconnection when serial device disappears 2023-02-18 13:31:22 +01:00
Mark Qvist e738c9561a Updated manual 2023-02-17 21:53:07 +01:00
Mark Qvist 994d1c8ee5 Updated roadmap 2023-02-17 21:41:41 +01:00
Mark Qvist ce21800537 Merge branch 'master' of https://git.unsigned.io/markqvist/Reticulum 2023-02-17 21:33:04 +01:00
Mark Qvist d02cdd5471 Added JSON output to rnstatus 2023-02-17 21:29:35 +01:00
Mark Qvist 7018e412d5 Updated roadmap 2023-02-17 21:28:13 +01:00
Mark Qvist 94f7505076 Updated docs 2023-02-17 21:25:14 +01:00
Mark Qvist b82ecf047a Added Link establishment rate calculation 2023-02-17 09:54:18 +01:00
Mark Qvist f21b93403a Updated documentation 2023-02-17 09:53:27 +01:00
Mark Qvist 59c88bc43b Merge branch 'master' of github.com:markqvist/Reticulum 2023-02-15 12:53:37 +01:00
Mark Qvist 8e98c1b038 Updated roadmap 2023-02-15 12:51:41 +01:00
Mark Qvist 4d3570fe4c Updated version 2023-02-15 12:28:06 +01:00
markqvist 3706769c33
Updated link. Fixes #216. 2023-02-09 22:27:11 +01:00
Mark Qvist ce91c34b21 Merge branch 'master' of https://git.unsigned.io/markqvist/Reticulum 2023-02-09 16:22:39 +01:00
Mark Qvist e37aa5e51a Added contribution guidelines 2023-02-09 16:18:59 +01:00
Mark Qvist 80af0f4539 Updated roadmap 2023-02-09 14:07:30 +01:00
Mark Qvist fc818f00f1 Merge branch 'master' of github.com:markqvist/Reticulum 2023-02-09 11:54:06 +01:00
Mark Qvist a55d39b7d4 Added Link ID to response_generator callback signature 2023-02-09 11:52:54 +01:00
Mark Qvist 8e264359db Fixed link 2023-02-09 11:25:51 +01:00
markqvist cbaeaa9f81
Merge pull request #203 from Erethon/rnodeconf-typo
rnodeconf: Typo fix on board versions
2023-02-04 19:21:21 +01:00
Dionysis Grigoropoulos 323c2285ce rnodeconf: Typo fix on board versions 2023-02-04 17:16:57 +02:00
Mark Qvist 5b6d0ec337 Updated manual 2023-02-04 16:00:07 +01:00
Mark Qvist 2bbb0f5ec2 Fixed missing entrypoint 2023-02-04 15:59:58 +01:00
Mark Qvist e385c79abd Updated manual 2023-02-04 15:38:44 +01:00
Mark Qvist 86faf6c28d Updated roadmap 2023-02-04 15:36:11 +01:00
Mark Qvist 6d8a3f09e5 Updated readme 2023-02-04 15:35:55 +01:00
Mark Qvist 1e88a390f4 Updated manual 2023-02-04 14:28:28 +01:00
Mark Qvist e9ae255f84 Added fallback version URL to rnodeconf updater 2023-02-04 14:18:11 +01:00
Mark Qvist 42dfee8557 Added Bluetooth pairing PIN output 2023-02-04 13:45:12 +01:00
Mark Qvist 177e724457 Updated roadmap 2023-02-04 12:17:05 +01:00
Mark Qvist 1b55ac7f24 Added destination hash generation and announce functionality to rnid utility 2023-02-03 20:27:39 +01:00
Mark Qvist 5447ed85c1 Updated documentation 2023-02-03 11:32:54 +01:00
Mark Qvist d7aacba797 Cleanup 2023-02-03 10:13:36 +01:00
Mark Qvist b92ddeccff Cleanup 2023-02-03 08:29:32 +01:00
Mark Qvist 6fac96ec18 Mask entire header 2023-02-03 00:11:11 +01:00
Mark Qvist 53ceafcebd Improved IFAC mask derivation 2023-02-02 23:59:02 +01:00
Mark Qvist 4df67304d6 Added payload masking to interfaces with IFAC enabled 2023-02-02 20:48:52 +01:00
Mark Qvist ac07ba1368 Added Identity generation to rnid utility 2023-02-02 19:26:27 +01:00
Mark Qvist ece064d46e Updated version 2023-02-02 19:05:15 +01:00
Mark Qvist 86ae42a049 Updated docs 2023-02-02 19:04:52 +01:00
Mark Qvist 08e480387b Added signing and validation to rnid 2023-02-02 19:02:05 +01:00
Mark Qvist f4241ae9c2 Added basic rnid utility 2023-02-02 17:45:59 +01:00
Mark Qvist b6928b7d83 Merge branch 'master' of github.com:markqvist/Reticulum 2023-02-02 10:40:58 +01:00
markqvist 3b2fbe02c6
Merge pull request #189 from Erethon/master
Fix bug where announce_identity could be undefined
2023-02-02 10:41:42 +01:00
markqvist a38bde7801
Merge pull request #191 from Erethon/packet-header-fix
packet: Fix header_type matching according to IFAC
2023-02-02 10:22:44 +01:00
markqvist df132d1d59
Merge pull request #199 from Erethon/doc-fixes
docs: Fix typos, remove old info about rnsconfig
2023-02-02 10:16:13 +01:00
Mark Qvist 143f7fa683 Merge branch 'master' of github.com:markqvist/Reticulum 2023-02-02 10:15:41 +01:00
Dionysis Grigoropoulos feb614d186 docs: Fix typos, remove old info about rnsconfig 2023-02-01 22:30:56 +02:00
Mark Qvist 159be78f23 Updated docs 2023-02-01 15:44:23 +01:00
Mark Qvist 4a6c6568e2 Merge branch 'master' of github.com:markqvist/Reticulum 2023-02-01 13:45:05 +01:00
Mark Qvist e64fa08c74 Updated documentation. Fixes #197. 2023-02-01 13:44:00 +01:00
markqvist 6651976423
Merge pull request #193 from jooray/patch-1
Fix a typo
2023-01-28 23:10:14 +01:00
Juraj Bednar 5decf22b8b
Fix a typo
Fix documentation: rncp called instead of rnx in rnx example
2023-01-28 21:32:37 +01:00
Mark Qvist a731a8b047 Merge branch 'master' of https://git.unsigned.io/markqvist/Reticulum 2023-01-27 18:51:37 +01:00
Mark Qvist 9bb9571fc9 Updated documentation 2023-01-27 18:51:25 +01:00
Dionysis Grigoropoulos 6ecae615de packet: Fix header_type matching according to IFAC
Ever since IFAC/Interface Access Codes were introduced, the header type
is one bit long and not two.
2023-01-27 15:29:06 +02:00
Dionysis Grigoropoulos 72ca6316f6 Fix bug where announce_identity could be undefined 2023-01-26 22:05:38 +02:00
Mark Qvist 0f023cc533 Updated roadmap 2023-01-19 15:14:15 +01:00
Mark Qvist 9f9a4a14d3 Updated changelog 2023-01-14 21:02:01 +01:00
Mark Qvist 0609251270 Updated manual 2023-01-14 20:51:17 +01:00
Mark Qvist e4f0b2dc39 Allow rnodeconf to provision RNodes from extracted firmwares on systems without prior tools installed 2023-01-14 20:47:34 +01:00
Mark Qvist 2ef06f2bd3 Updated documentation 2023-01-14 20:46:32 +01:00
Mark Qvist c5a586175d Updated version 2023-01-14 15:06:30 +01:00
Mark Qvist 2a1ec6592c Added autoinstall and updating from extracted RNode Firmwares to rnodeconf 2023-01-14 14:51:44 +01:00
Mark Qvist eed7698ed3 Added firmware extraction from existing devices to rnodeconf 2023-01-14 13:20:19 +01:00
Mark Qvist 205c612a0f Updated roadmap 2023-01-14 10:22:21 +01:00
Mark Qvist 8d96673bec Updated flasher paths 2023-01-14 00:55:34 +01:00
Mark Qvist 62a13eb0e8 Added RNode Bootstrap Console info to rnodeconf autoinstaller 2023-01-14 00:28:34 +01:00
Mark Qvist 10d03753b5 Updated documentation 2023-01-13 12:00:12 +01:00
Mark Qvist f19b87759f Merge branch 'master' of https://git.unsigned.io/markqvist/Reticulum 2023-01-13 11:59:42 +01:00
Mark Qvist 04f009f57c Updated manual 2023-01-13 12:00:07 +01:00
Mark Qvist 78253093c7 Updated rnodeconf 2023-01-13 11:59:38 +01:00
Mark Qvist 63d54dbecb Added console image flashing to rnodeconf 2023-01-11 13:56:41 +01:00
Mark Qvist 32922868b9 Updated rnodeconf install guide 2023-01-11 11:45:10 +01:00
Mark Qvist e18f6d2969 Updated screenshots 2023-01-08 01:04:49 +01:00
Mark Qvist 08f4462ef8 Updated roadmap 2023-01-04 17:43:29 +01:00
Mark Qvist 7ed0726feb Updated documentation Getting Started section 2023-01-01 18:49:13 +01:00
Mark Qvist 2839d39350 Updated documentation images 2023-01-01 18:48:15 +01:00
Mark Qvist c992573257 Updated roadmap 2023-01-01 17:04:20 +01:00
Mark Qvist d64e547436 Updated roadmap 2022-12-29 15:18:10 +01:00
Mark Qvist 7eb0e03cb9 Updated roadmap 2022-12-29 15:17:00 +01:00
Mark Qvist f1deef696b Updated roadmap 2022-12-29 14:48:38 +01:00
Mark Qvist 48e14902d0 Updated roadmap 2022-12-29 14:42:45 +01:00
Mark Qvist 8acf63a195 Updated changelog 2022-12-29 14:40:48 +01:00
Mark Qvist 392bd65322 Added changelog 2022-12-29 14:35:55 +01:00
Mark Qvist 4ab3074d30 Updated roadmap 2022-12-29 14:33:08 +01:00
Mark Qvist 4de612e2fb Added release history to change log 2022-12-29 14:15:05 +01:00
Mark Qvist 3b192bfb47 Updated roadmap 2022-12-29 14:10:50 +01:00
Mark Qvist 0d562c89a7 Updated roadmap 2022-12-29 14:10:21 +01:00
Mark Qvist 972922fff1 Updated roadmap 2022-12-29 14:09:47 +01:00
Mark Qvist 296a2d91e8 Updated roadmap 2022-12-29 14:06:28 +01:00
Mark Qvist 446fb79786 Updated roadmap 2022-12-29 14:04:11 +01:00
Mark Qvist 700601d63e Updated documentation and manual 2022-12-23 23:32:38 +01:00
Mark Qvist 274c7199b0 Updated version 2022-12-23 23:27:37 +01:00
Mark Qvist 7960226883 Fixed missing path invalidation on failed link establishments made from a shared instance client 2022-12-23 23:26:50 +01:00
Mark Qvist bb74878e94 Reordered property assignment 2022-12-23 23:24:26 +01:00
Mark Qvist 549d22be68 Updated documentation and manual 2022-12-22 21:13:44 +01:00
Mark Qvist 5c2c935b6f Updated version 2022-12-22 21:08:02 +01:00
Mark Qvist 8402541c73 Faster roaming path recovery for multiple interface non-transport instances 2022-12-22 20:17:09 +01:00
Mark Qvist c34c268a6a Added carrier change detection flag to AutoInterface 2022-12-22 18:20:34 +01:00
Mark Qvist 8fcdc4613c Adjusted loglevels 2022-12-22 18:20:13 +01:00
Mark Qvist f645fa569b Fixed AutoInterface multicast echoes failing on interfaces with rolling MAC addresses on every re-connect 2022-12-22 17:46:46 +01:00
Mark Qvist 469947dab9 Updated manual 2022-12-22 15:49:47 +01:00
Mark Qvist 2386fc3635 Updated documentation and manual 2022-12-22 15:11:53 +01:00
Mark Qvist e9e98a00c2 Updated version 2022-12-22 15:07:36 +01:00
Mark Qvist b305eb8e0a Improved path response handling. Prepared destination path response handling for multi-path Transport. 2022-12-22 11:28:56 +01:00
Mark Qvist dd7931d421 Added signal quality stats to announce log output 2022-12-22 11:26:59 +01:00
Mark Qvist 191dce1301 Updated manual 2022-12-20 21:13:23 +01:00
Mark Qvist 3b5a27ba60 Updated readme 2022-12-20 21:08:08 +01:00
Mark Qvist 3c91f7f18b Updated documentation 2022-12-20 20:57:49 +01:00
Mark Qvist 171457713b Improved RNode hotplug over Bluetooth on Android 2022-12-20 15:17:46 +01:00
Mark Qvist 67ee8d6aab Added originator check to path rediscovery on failed links 2022-12-19 01:31:00 +01:00
Mark Qvist 13fa7d49d9 Added automatic path rediscovery on failed link establishments 2022-12-19 01:15:49 +01:00
Mark Qvist 66d921e669 Improved resource advertisement retry handling 2022-12-19 01:10:34 +01:00
Mark Qvist 85f60ea04e Added check for already transferring resource to Link class 2022-12-19 01:04:49 +01:00
Mark Qvist 4870e741f6 Added link request proof signature validation for every transport hop 2022-12-18 21:27:14 +01:00
Mark Qvist f71c1986af Added Heltec USB issue notice to autoinstaller 2022-12-16 23:34:31 +01:00
Mark Qvist 30d8e351dd Updated version 2022-12-16 23:21:22 +01:00
Mark Qvist 5e62e3bc22 Merge branch 'master' of https://git.unsigned.io/markqvist/Reticulum 2022-12-15 21:17:16 +01:00
Mark Qvist 1a67e276ad Updated broken link. Fixes #174. Thanks @mkinney! 2022-12-15 21:16:20 +01:00
Mark Qvist df37a4a884 Updated broken link 2022-12-15 21:15:47 +01:00
Mark Qvist d26bbbd59f Merge branch 'master' of https://git.unsigned.io/markqvist/Reticulum 2022-12-15 17:14:15 +01:00
Mark Qvist 2a264fa7d6 Fixed invalid driver proxy for Qinheng CH34x chips on Android 2022-12-15 17:14:09 +01:00
Mark Qvist d5e0a461cf Fixed invalid check for None 2022-11-25 00:42:22 +01:00
Mark Qvist e28dbd4afa Updated manual 2022-11-24 17:48:04 +01:00
Mark Qvist 8626dcd69f Updated roadmap 2022-11-24 17:30:01 +01:00
Mark Qvist e34f21f4dc Updated roadmap 2022-11-24 17:29:25 +01:00
Mark Qvist f692e81b8e Fixed AutoInterface roaming on Android devices that rotate Ethernet/WiFi MAC addresses on reconnect 2022-11-24 17:19:01 +01:00
Mark Qvist 28e43b52f9 Updated manual 2022-11-24 17:16:43 +01:00
Mark Qvist 680d17fb98 Improved startup time for instances and programs connected to a shared instance 2022-11-24 13:28:22 +01:00
Mark Qvist 1e477c976c Updated documentation 2022-11-24 12:32:43 +01:00
Mark Qvist ab301cdb79 Updated version 2022-11-24 10:45:45 +01:00
Mark Qvist cecb4b3acb Fixed buffered input stream reader not working on Android API levels < 30 2022-11-23 20:39:49 +01:00
Mark Qvist de53a105a4 Improved time pretty-print function 2022-11-23 17:15:46 +01:00
Mark Qvist 9e4ae3c6fe Updated roadmap 2022-11-22 20:20:23 +01:00
Mark Qvist 3482d84bc0 Updated manual 2022-11-17 18:19:42 +01:00
Mark Qvist 51c5c85fcd Updated readme 2022-11-17 16:51:59 +01:00
Mark Qvist 57aeab43a2 Updated readme and roadmap 2022-11-17 12:39:09 +01:00
Mark Qvist 92cccddaab Updated readme and roadmap 2022-11-17 12:36:41 +01:00
Mark Qvist 3de182192a Updated readme and roadmap 2022-11-17 12:35:21 +01:00
Mark Qvist aca6b0c110 Added roadmap 2022-11-17 12:25:48 +01:00
Mark Qvist 3d6e7a9597 Updated docs 2022-11-14 11:25:47 +01:00
markqvist 21da55dd39
Merge pull request #154 from thatv/master
Fixed Hop-number in docs
2022-11-14 11:23:10 +01:00
thatv 9e664af1c6
Update understanding.html 2022-11-12 21:37:27 +01:00
Mark Qvist 7736ed589e Updated manual 2022-11-03 23:08:37 +01:00
Mark Qvist f22504d080 Improved I2P recovery time on unresponsive tunnels 2022-11-03 22:47:08 +01:00
Mark Qvist f22e5cc200 Fixed socket references. Closes #146. 2022-11-03 19:51:04 +01:00
Mark Qvist 87b73b6c67 Updated docs 2022-11-03 19:48:39 +01:00
Mark Qvist 36906f6567 Updated version 2022-11-03 18:05:13 +01:00
Mark Qvist 52edb54d21 Updated readme 2022-11-03 18:05:04 +01:00
Mark Qvist 88b88b9b64 Fixed missing check for socket state 2022-11-03 18:03:00 +01:00
Mark Qvist 76fcad0b53 Added better I2P state visibility to rnstatus util 2022-11-03 17:49:25 +01:00
Mark Qvist 01e520b082 Adjusted I2P interface timings 2022-11-03 16:30:07 +01:00
Mark Qvist 1d2a0fe4c8 Improved I2P tunnel state detection. Fixed missing IFAC init on spawned I2P interfaces. 2022-11-03 15:22:34 +01:00
Mark Qvist 0f19ced9d3 Fixed missing IFAC identity init on spawned TCP clients. Closes #137. 2022-11-03 14:16:00 +01:00
Mark Qvist 4ca32c039d Updated documentation 2022-11-03 12:08:23 +01:00
Mark Qvist 81ec701240 Updated version 2022-11-03 12:05:10 +01:00
Mark Qvist b16d614495 Updated readme 2022-11-03 12:04:54 +01:00
Mark Qvist 5f7e37187f Fixed local firmware cache location for rnodeconf 2022-11-03 12:03:26 +01:00
Mark Qvist 622fd6cf46 Updated docs 2022-11-03 00:45:53 +01:00
Mark Qvist b9d73518dd Improved rnodeconf firmware install 2022-11-03 00:42:46 +01:00
Mark Qvist 17bdf45ac1 Updated documentation 2022-11-02 22:46:47 +01:00
Mark Qvist 36052e2c61 Updated version 2022-11-02 22:34:52 +01:00
Mark Qvist 06d232f889 Added Bluetooth control interface for RNode interfaces on Android 2022-11-02 22:34:07 +01:00
Mark Qvist f9b3c749e0 Improved cleanup on device disconnect 2022-11-02 20:44:09 +01:00
Mark Qvist 63a59753af Implemented Bluetooth support for RNode interfaces on Android. Added Bluetooth/USB multiplexing and Bluetooth manager to interface. 2022-11-02 20:43:46 +01:00
Mark Qvist 20696e7827 Bluetooth support for RNode interfaces on Linux (via standard rfcomm driver) 2022-11-02 20:42:45 +01:00
Mark Qvist 127c9862da Updated manual 2022-11-02 01:31:32 +01:00
Mark Qvist fee9473cac Improved rnodeconf timings 2022-11-02 01:23:23 +01:00
Mark Qvist 5337b72853 Updated manual 2022-11-01 23:54:28 +01:00
Mark Qvist 9bc5d91106 Added rnodeconf to package 2022-11-01 22:40:09 +01:00
Mark Qvist 45ae66e9bf Updated bluetooth control commands for RNode interface 2022-11-01 20:27:41 +01:00
Mark Qvist f03cf34370 Updated documentation 2022-11-01 20:27:11 +01:00
Mark Qvist 47db2a3bd5 Added log output control options 2022-11-01 20:26:55 +01:00
Mark Qvist 40cd961eab Added better teardown handling on RNodeInterfaces 2022-10-30 23:13:44 +01:00
Mark Qvist 34cdd4bf0f Improved RNode error reporting and teardown 2022-10-29 16:41:47 +02:00
Mark Qvist b0ef58e5ca Added support for writing to display framebuffer of connected RNodes 2022-10-29 14:28:53 +02:00
Mark Qvist b6020b5ea8 Updated version 2022-10-29 14:28:06 +02:00
Mark Qvist ee544fcf31 Updated documentation 2022-10-22 01:43:51 +02:00
Mark Qvist 886b0ac0ca Fixed Android interfaces import 2022-10-22 01:38:38 +02:00
Mark Qvist ed4070a3d1 Removed stray import. Fixes #125. 2022-10-22 01:05:08 +02:00
Mark Qvist 6d6568852a Updated docs and manual 2022-10-20 20:15:31 +02:00
Mark Qvist b479e14ca5 Improved handling of Android interfaces in apps without hardware access 2022-10-20 20:10:50 +02:00
Mark Qvist 8fec5cedbe Updated readme 2022-10-20 14:52:11 +02:00
Mark Qvist 9852a3534b Updated manual and documentation 2022-10-20 14:39:49 +02:00
Mark Qvist 81fc920bdf Fixed AutoInterface peering hashes on WiFi devices that employ MAC address randomisation on reconnects and roaming 2022-10-19 11:57:09 +02:00
Mark Qvist 5b1b18e84a Fixed incorrect behaviour in announce processing for instance-local destinations to roaming- or boundary-mode interfaces 2022-10-18 18:24:29 +02:00
Mark Qvist 9c8c143c62 Added logging to announce processing 2022-10-18 17:44:14 +02:00
Mark Qvist db9858d75f Cleanup 2022-10-16 00:11:40 +02:00
Mark Qvist 874405cbdd Fixed missing announce cap on hotplugged interfaces 2022-10-15 23:14:47 +02:00
Mark Qvist 2a3f2b8bdc Updated version 2022-10-15 14:57:57 +02:00
Mark Qvist 9aae06c694 Added Android-specific KISS interface 2022-10-15 14:57:16 +02:00
Mark Qvist 70ffc38c49 Android-specific import 2022-10-15 14:56:23 +02:00
Mark Qvist 73071b0755 Cleanup 2022-10-15 14:41:12 +02:00
Mark Qvist ab697dc583 Android-specific import 2022-10-15 11:39:23 +02:00
Mark Qvist ecc78fa45f Added Android serial interface 2022-10-15 11:36:18 +02:00
Mark Qvist e5309caf48 Added Android serial interface 2022-10-15 11:33:48 +02:00
Mark Qvist 094d2f2079 Cleanup 2022-10-15 11:31:34 +02:00
Mark Qvist 5a917c9dac Updated readme 2022-10-14 15:41:30 +02:00
Mark Qvist 1df0eea0b7 Updated readme 2022-10-14 15:31:17 +02:00
Mark Qvist 718c3577db Updated readme 2022-10-14 15:28:41 +02:00
Mark Qvist 5111c32854 Fixed help text 2022-10-13 23:10:38 +02:00
Mark Qvist 63d4e9a399 Updated readme 2022-10-13 23:10:15 +02:00
Mark Qvist 60773ceb16 Return public identity for registered destinations in Identity.recall() 2022-10-13 20:43:38 +02:00
Mark Qvist 5d6c3dd891 Cleanup 2022-10-12 18:56:30 +02:00
Mark Qvist a564dd2b2d Cleanup 2022-10-12 18:06:21 +02:00
Mark Qvist 16cf1ab1ba Fix debug output 2022-10-12 16:08:48 +02:00
Mark Qvist 47e326c8a9 Import Android-specific RNode interface on Android 2022-10-12 16:08:29 +02:00
Mark Qvist 9a7585cbef Added platform detect function 2022-10-12 16:07:53 +02:00
Mark Qvist 902f7af64d Added platform check 2022-10-12 15:14:42 +02:00
Mark Qvist 004bf27526 Added Android-specific RNode interface. Contains debug code. Not ready yet. Hang in there. 2022-10-12 15:11:02 +02:00
Mark Qvist 9cad90266e Reverted RNode interface to exclude Android-specific logic 2022-10-12 15:00:21 +02:00
Mark Qvist e9de01e10e Added property default 2022-10-12 14:58:00 +02:00
Mark Qvist 372bedcd85 Added support for RNode interfaces on Android 2022-10-11 14:06:42 +02:00
Mark Qvist 1141a3034d Updated documentation 2022-10-07 01:00:15 +02:00
Mark Qvist 3f3276ca45 Updated documentation 2022-10-06 23:32:19 +02:00
Mark Qvist 6e742f7267 Updated documentation 2022-10-06 23:22:30 +02:00
Mark Qvist d3525943c2 Updated version 2022-10-06 23:16:01 +02:00
Mark Qvist cb55189e5c Truncate name_hash to 80 bits. Take all array slices from Identity.NAME_HASH_LENGTH constant. 2022-10-06 23:14:32 +02:00
Mark Qvist 0b98a9bff4 Updated docs and manual 2022-10-06 19:11:05 +02:00
Mark Qvist a8d6e1780a Merge branch 'master' of github.com:markqvist/Reticulum 2022-10-06 17:42:11 +02:00
Mark Qvist cb9840250a Updated docs and manual 2022-10-06 17:41:07 +02:00
Mark Qvist 16f8725906 Updated docs and manual 2022-10-06 17:35:38 +02:00
markqvist 2656157462
Update README.md 2022-10-04 23:22:46 +02:00
markqvist c9c7469b32
Update README.md 2022-10-04 23:22:05 +02:00
markqvist 0f429e2385
Update README.md 2022-10-04 23:18:44 +02:00
Mark Qvist 89d8342ce5 Improved logging. Reject mismatching keys on hash collision. 2022-10-04 22:42:59 +02:00
Mark Qvist c18997bf5b Cleanup 2022-10-04 22:41:58 +02:00
Mark Qvist 1e4dd9d6f0 Added note 2022-10-04 22:40:43 +02:00
Mark Qvist b296c10541 Added check for app_data 2022-10-04 22:40:03 +02:00
Mark Qvist 9065de5fb4 Updated docs and manual 2022-10-04 09:34:52 +02:00
Mark Qvist 7997fd104e Fix destination hash construction and random blob extraction 2022-10-04 09:11:20 +02:00
Mark Qvist 11667504b2 Updated docs 2022-10-04 09:06:29 +02:00
Mark Qvist 7744c4ffe6 Updated version 2022-10-04 07:00:13 +02:00
Mark Qvist 8a61d2c8d5 Fixed missing validation in announce processing 2022-10-04 06:59:33 +02:00
Mark Qvist 1380016995 Updated tests 2022-10-04 06:55:50 +02:00
markqvist f2aff3fbd5
Update README.md 2022-09-30 22:54:51 +02:00
Mark Qvist b859984ebe Updated manual 2022-09-30 21:54:51 +02:00
Mark Qvist 9593b1c295 Updated readme 2022-09-30 21:23:30 +02:00
Mark Qvist 3d6455fb37 Updated manual 2022-09-30 21:16:22 +02:00
Mark Qvist b085127d6e Fixed config dir path 2022-09-30 20:41:11 +02:00
Mark Qvist 80ffa5ebc3 Updated manual 2022-09-30 20:38:27 +02:00
Mark Qvist 76fb73f46c Updated configuration path defaults 2022-09-30 20:37:46 +02:00
Mark Qvist e51b0077c7 Improved configuration info in docs 2022-09-30 20:37:03 +02:00
Mark Qvist c18806c912 Updated deprecated threading API call and updated docs 2022-09-30 19:02:41 +02:00
Mark Qvist 683881d6cd Updated documentation 2022-09-30 18:50:35 +02:00
Mark Qvist f62d9946ac Updated documentation and manual 2022-09-30 18:43:04 +02:00
Mark Qvist 893a463663 Updated docs and manual 2022-09-30 14:22:33 +02:00
Mark Qvist 39b788867d Updated docs and manual 2022-09-30 13:09:10 +02:00
Mark Qvist 2abd8a1aae Updated docs and manual 2022-09-30 11:26:51 +02:00
Mark Qvist 7940ac0812 Updated docs and manual 2022-09-30 11:15:34 +02:00
Mark Qvist 3f2075da6f Updated manual and documentation 2022-09-30 00:02:15 +02:00
Mark Qvist e90b2866b4 Updated readme and documentation 2022-09-29 23:20:49 +02:00
Mark Qvist 8886ed5794 Fixed missing destination-side ephemeral key generation in link establishment 2022-09-29 22:47:10 +02:00
Mark Qvist 32ee4216fd Changed log levels 2022-09-24 12:23:59 +02:00
Mark Qvist 571ad2c8fb Added initial connection timeout option to TCPClientInterface 2022-09-15 15:35:28 +02:00
Mark Qvist 0c47ff1ccc Updated documentation and manual 2022-09-14 18:39:39 +02:00
Mark Qvist 18f450c58b Periodically try to connect RNodes that were unavailable at startup. Closes #87. 2022-09-14 17:43:07 +02:00
Mark Qvist b3d85b583f Place config in .config dir by default 2022-09-14 16:21:34 +02:00
Mark Qvist 03695565ba Added rnsd warning on start as client 2022-09-14 00:13:20 +02:00
Mark Qvist 3e380a8fc7 Fixed rendering in rnpath utility 2022-09-14 00:07:23 +02:00
Mark Qvist fd35451927 Updated readme 2022-09-14 00:04:00 +02:00
Mark Qvist 921987c999 Added table persist on local client disconnect 2022-09-13 22:32:00 +02:00
Mark Qvist 81e0989070 Updated readme 2022-09-13 22:30:28 +02:00
Mark Qvist 3fa7698438 Updated readme 2022-09-13 21:06:07 +02:00
Mark Qvist 75e32af1c5 Added periodic data persistence for shared and standalone instances 2022-09-13 20:17:25 +02:00
Mark Qvist 9775893840 Improved known destination saving 2022-09-06 19:43:46 +02:00
Mark Qvist e5c0ee4153 Added build variant to makefile 2022-09-06 19:42:50 +02:00
Mark Qvist 4042dd6ef7 Added locking and timeouts to table saving routines 2022-09-06 18:05:02 +02:00
Mark Qvist af538e0489 Improved shutdown handling and table saving 2022-09-06 17:42:13 +02:00
Mark Qvist 8f4cf433ba Updated docs 2022-09-06 16:50:33 +02:00
Mark Qvist c55e1e9628 Version bump 2022-09-06 12:24:46 +02:00
Mark Qvist be02586133 Added detach handler to TCP Server Interface 2022-09-06 12:23:52 +02:00
Mark Qvist 6db742ade7 Updated documentation 2022-08-25 11:00:30 +02:00
Mark Qvist 6a53298aa2 Merge branch 'master' of github.com:markqvist/Reticulum 2022-08-25 10:59:47 +02:00
Mark Qvist f00b6a6fdb Updated roadmap 2022-08-25 10:59:43 +02:00
markqvist dc0a0735db
Merge pull request #90 from huyndao/huy-proofread
Huy proofread
2022-08-12 10:40:13 +02:00
huyndao b230edd21d Fixed additional spelling errors 2022-08-05 17:54:14 -04:00
huyndao 30e75b1bfb Fixed spelling errors 2022-08-05 17:38:23 -04:00
Mark Qvist 7f70ffdc21 Updated documentation 2022-07-09 15:52:24 +02:00
Mark Qvist 6e6b49dcd2 Added extra resource transfer test 2022-07-09 15:50:18 +02:00
Mark Qvist 383f96d82a Updated version 2022-07-09 15:46:42 +02:00
Mark Qvist ebef2da7a8 Fixed incorrect allocation size in resource advertisements after switching to 128-bit address space 2022-07-09 15:46:19 +02:00
Mark Qvist 4946d9f2eb Updated readme 2022-07-08 17:05:02 +02:00
Mark Qvist fcb61e3ebf Updated documentation and manual 2022-07-08 12:14:03 +02:00
Mark Qvist eae788957a Updated version 2022-07-08 12:01:13 +02:00
Mark Qvist 045a9d8451 Fixed a race condition in link establishment flow 2022-07-08 11:14:35 +02:00
Mark Qvist da644d33ea Updated documentation 2022-07-08 00:32:03 +02:00
Mark Qvist e03fc38920 Updated documentation 2022-07-08 00:28:41 +02:00
Mark Qvist c36c0368ef Updated manual 2022-07-08 00:24:00 +02:00
Mark Qvist 3d979e2d65 Added Android compatibility to AES proxy class 2022-07-08 00:22:30 +02:00
Mark Qvist 5158613501 Fixed missing config section check 2022-07-08 00:21:48 +02:00
Mark Qvist b53185779a Updated documentation 2022-07-08 00:21:23 +02:00
Mark Qvist 5b63f84491 Updated readme 2022-07-05 00:45:58 +02:00
Mark Qvist fd2cc1231f Updated readme 2022-07-04 23:55:43 +02:00
Mark Qvist 76950ee3de Updated manual 2022-07-04 23:46:21 +02:00
markqvist 8565b2fdf5
Update README.md 2022-07-03 09:29:35 +02:00
markqvist 2a915eab2d
Merge pull request #76 from joshuafuller/master
Fix some minor spelling errors
2022-07-03 09:28:40 +02:00
Joshua Fuller 36654c1414
Fix some minor spelling errors 2022-07-02 17:32:39 -05:00
Mark Qvist fdf0456cf0 Updated readme and docs 2022-07-02 18:41:57 +02:00
Mark Qvist 8cff18f8ce Improved cache handling 2022-07-02 15:15:47 +02:00
Mark Qvist 5e072affe4 Changed job timing 2022-07-02 13:34:17 +02:00
Mark Qvist fc4c7638a6 Added cache job scheduler 2022-07-02 13:24:07 +02:00
Mark Qvist 532f9ee665 Added cache cleaning 2022-07-02 13:12:54 +02:00
Mark Qvist 4a725de935 Improved rnx interactive mode 2022-07-02 10:38:35 +02:00
Mark Qvist 2335a71427 Fixed --no-auth option in rncp 2022-07-02 09:48:15 +02:00
Mark Qvist 3e70dd6134 Fixed --no-auth option in rncp 2022-07-02 09:33:05 +02:00
Mark Qvist 474521056b Updated packet sizes in docs 2022-07-02 08:45:57 +02:00
Mark Qvist d33154bfdb Cleanup 2022-07-02 08:45:40 +02:00
Mark Qvist 8f82a2b87f Updated documentation to reflect 128-bit address space 2022-07-01 23:34:02 +02:00
Mark Qvist 304610c682 Updated documentation to reflect 128-bit address space 2022-07-01 23:30:20 +02:00
Mark Qvist bc39a1acf1 Fixed static size index 2022-07-01 21:16:01 +02:00
Mark Qvist 20b7278f7b Updated documentation 2022-07-01 21:15:15 +02:00
Mark Qvist 1f66a9b0c0 Updated readme 2022-07-01 21:13:55 +02:00
Mark Qvist f464ecfcb5 Updated website 2022-07-01 17:31:07 +02:00
Mark Qvist 49fdeb9bc4 Updated readme 2022-07-01 17:30:51 +02:00
Mark Qvist 40560a31f2 Version updated 2022-07-01 10:27:31 +02:00
Mark Qvist f7d8a4b3b3 Updated tests 2022-06-30 20:37:51 +02:00
Mark Qvist c498bf5668 Updated examples 2022-06-30 20:07:48 +02:00
Mark Qvist 2e19304ebf Fixed static length calculation in proof destination generation 2022-06-30 19:33:35 +02:00
Mark Qvist 1cd7c85a52 Cleanup 2022-06-30 19:32:47 +02:00
Mark Qvist 171f43f4e3 Cleanup 2022-06-30 19:32:29 +02:00
Mark Qvist 09a1088437 Added description about Fernet modifications 2022-06-30 19:32:08 +02:00
Mark Qvist 6346bc54a8 Updated readme 2022-06-30 19:31:13 +02:00
Mark Qvist 40e25d8e40 Fixed static destination size 2022-06-30 19:12:44 +02:00
Mark Qvist e19438fdcc Added license headers 2022-06-30 19:10:51 +02:00
markqvist d85ea07b5e
Update README.md 2022-06-30 14:07:58 +02:00
Mark Qvist 4dda0e8a5b Updated readme 2022-06-30 14:06:55 +02:00
Mark Qvist 5faf13d505 Expanded address space to 128 bits 2022-06-30 14:02:57 +02:00
Mark Qvist 2be1c7633d Updated readme 2022-06-27 20:17:23 +02:00
Mark Qvist 6ac2f437b9 Updated documentation 2022-06-22 23:26:08 +02:00
Mark Qvist 2fe9dec459 Updated documentation 2022-06-22 16:34:43 +02:00
Mark Qvist 8f8da080f5 Updated documentation 2022-06-22 16:20:01 +02:00
Mark Qvist 01a973db91 Updated documentation 2022-06-22 16:13:26 +02:00
Mark Qvist 1c4528dca1 Updated documentation 2022-06-22 16:10:54 +02:00
Mark Qvist a99031873d Updated documentation 2022-06-22 16:04:44 +02:00
Mark Qvist ab1186eaf7 Updated documentation 2022-06-22 15:48:45 +02:00
Mark Qvist 940c889440 Updated manual 2022-06-22 15:19:45 +02:00
Mark Qvist ac7c36029b Updated documentation 2022-06-22 15:19:18 +02:00
Mark Qvist c79811e040 Updated makefile 2022-06-22 10:12:05 +02:00
Mark Qvist 7545613c52 Updated documentation 2022-06-22 10:08:27 +02:00
Mark Qvist 7bd6da034a Updated readme 2022-06-22 10:00:43 +02:00
Mark Qvist 34f10d1196 Updated readme 2022-06-16 19:58:34 +02:00
Mark Qvist be84e8a731 Updated readme 2022-06-16 19:53:17 +02:00
Mark Qvist 7331bd2c09 Updated makefile 2022-06-14 13:45:48 +02:00
Mark Qvist 6bfd0bf4eb Resource profiling with yappi instead of cprofile 2022-06-14 13:44:12 +02:00
Mark Qvist 3013c10180 Updated readme 2022-06-13 17:28:03 +02:00
Mark Qvist 95a34dad4b Updated readme 2022-06-13 17:25:13 +02:00
Mark Qvist a3bc2ef38f Updated readme 2022-06-13 17:24:35 +02:00
Mark Qvist aa255d0713 Tuned I2PInterface socket timeouts 2022-06-13 15:45:53 +02:00
Mark Qvist 5a8152c589 Fixed I2PInterface status not being set on connectable interfaces 2022-06-12 21:34:54 +02:00
Mark Qvist 8a24dbae40 Added filter option to rnstatus utility 2022-06-12 19:08:47 +02:00
Mark Qvist 2f1329e581 Updated docs version 2022-06-12 18:57:08 +02:00
Mark Qvist 2166294a7a Optimised resource transfer speed on faster links 2022-06-12 18:56:49 +02:00
Mark Qvist 8042f5eaa1 Improved log output 2022-06-12 18:55:06 +02:00
Mark Qvist 1b1ab42aaa Updated readme 2022-06-12 13:28:16 +02:00
Mark Qvist ae8fcb88d8 Resource timeout tuning 2022-06-12 13:28:05 +02:00
Mark Qvist 98b232bc4c Updated link test 2022-06-12 11:58:54 +02:00
Mark Qvist d7a444556a Tuned TCP socket options 2022-06-12 11:50:09 +02:00
Mark Qvist 58eaceb48c Updated docs 2022-06-12 11:49:37 +02:00
Mark Qvist 3c81f93d4a Added link accept option to API 2022-06-12 11:49:24 +02:00
Mark Qvist 2685e043ea Fixed missing check for zero-length packets on IFAC-enabled interfaces. Fixes #65. 2022-06-11 18:52:33 +02:00
Mark Qvist 214ee9d771 Updated readme 2022-06-11 15:03:14 +02:00
Mark Qvist d39c1893e7 Cleanup 2022-06-11 14:11:58 +02:00
Mark Qvist 548cbd50d8 Improved I2PInterface error handling and stability 2022-06-11 13:52:56 +02:00
Mark Qvist 6b06875c42 Fixed potential undefined variable 2022-06-11 13:42:08 +02:00
Mark Qvist d7262c7cbe Fixed socket leak in I2PInterface 2022-06-11 11:27:01 +02:00
Mark Qvist d9a021465e Updated readme 2022-06-10 21:44:17 +02:00
Mark Qvist 8451bbe7e6 Tuned resource window 2022-06-10 18:17:48 +02:00
Mark Qvist 1ac7238347 Cleanup 2022-06-10 17:05:00 +02:00
Mark Qvist ea7762cbc0 Updated makefile 2022-06-10 16:37:02 +02:00
Mark Qvist c4a7d17b2f Updated tests 2022-06-10 16:36:30 +02:00
Mark Qvist c758c4d279 Updated readme 2022-06-10 13:14:16 +02:00
Mark Qvist d136eac32b Updated readme 2022-06-10 13:13:51 +02:00
Mark Qvist f74e6d12c9 Updated readme 2022-06-10 13:13:15 +02:00
Mark Qvist 6f68d6edc4 Updated readme 2022-06-10 13:12:07 +02:00
Mark Qvist 076d6b09c4 Updated makefile 2022-06-10 12:54:12 +02:00
Mark Qvist 8c484c786f Updated makefile 2022-06-10 12:50:48 +02:00
Mark Qvist 363d56d49d Enabled pure-python build 2022-06-10 12:46:20 +02:00
markqvist 2a581a9a9b
Update README.md 2022-06-10 12:19:31 +02:00
Mark Qvist 2779852417 Updated readme 2022-06-10 12:15:49 +02:00
Mark Qvist e0f69344c2 Updated readme 2022-06-10 12:15:01 +02:00
Mark Qvist 469c9919cb Updated readme 2022-06-10 12:13:36 +02:00
Mark Qvist 6518370d79 Updated readme 2022-06-10 12:13:03 +02:00
Mark Qvist ffe61e701a Updated readme 2022-06-10 12:12:26 +02:00
Mark Qvist 7f65c767f0 Updated readme 2022-06-10 12:11:43 +02:00
Mark Qvist 157a54d4a4 Updated readme 2022-06-10 11:45:40 +02:00
Mark Qvist c8c0f77c81 Updated readme 2022-06-10 11:37:30 +02:00
Mark Qvist 4c3a82cf20 Updated readme 2022-06-10 11:36:32 +02:00
Mark Qvist 1ec83b535f Updated readme 2022-06-10 11:34:57 +02:00
Mark Qvist 31914a10aa Updated readme 2022-06-10 11:34:18 +02:00
Mark Qvist 6e369bf82f Updated readme 2022-06-10 11:33:54 +02:00
Mark Qvist 39059a365d Updated readme 2022-06-10 11:33:21 +02:00
Mark Qvist 0b2dba7977 Updated readme 2022-06-10 11:32:57 +02:00
Mark Qvist c6e2ba2cf3 Updated readme 2022-06-10 11:32:10 +02:00
Mark Qvist c5918395de Updated readme 2022-06-10 11:31:33 +02:00
Mark Qvist 861ac92c4c Updated readme 2022-06-10 11:29:39 +02:00
Mark Qvist 715e35d626 Updated readme 2022-06-10 11:28:59 +02:00
Mark Qvist a8ea7bcca6 Updated tests 2022-06-10 11:27:52 +02:00
Mark Qvist 534a8825eb Updated setup.py 2022-06-10 11:27:31 +02:00
Mark Qvist 89f3c0f649 Updated readme 2022-06-10 11:26:46 +02:00
Mark Qvist e4a82d5358 Updated link test 2022-06-09 21:49:13 +02:00
Mark Qvist 68cd79768b Added internal python-only AES-128-CBC implementation 2022-06-09 21:13:34 +02:00
Mark Qvist 701c624d0a Updated Identity tests 2022-06-09 21:12:26 +02:00
Mark Qvist ec90af750d Updated link tests 2022-06-09 19:54:20 +02:00
Mark Qvist 2c1b3a0e5b Optimised resource performance over varied network topologies 2022-06-09 19:29:33 +02:00
Mark Qvist 02968baa76 Added establishment cost property to Link 2022-06-09 19:28:31 +02:00
Mark Qvist 06fefebc08 Updated tests 2022-06-09 19:27:11 +02:00
Mark Qvist 513a82e363 Updated link test 2022-06-09 17:14:43 +02:00
Mark Qvist a4b80e7ddb Updated link test 2022-06-09 17:07:44 +02:00
Mark Qvist be6910e4e0 Work on Resource optimisation 2022-06-09 17:00:27 +02:00
Mark Qvist 0a8b755230 Transport optimisations 2022-06-09 16:54:47 +02:00
Mark Qvist d334613888 Removed delay 2022-06-09 16:48:31 +02:00
Mark Qvist 14bdcaf770 Added size print function 2022-06-09 14:46:36 +02:00
Mark Qvist 592c405067 Cleanup 2022-06-09 14:46:02 +02:00
Mark Qvist bb8012ad50 Updated test output 2022-06-09 14:45:30 +02:00
Mark Qvist 648e9a68b8 Added profiling info to LocalInterface 2022-06-09 14:45:00 +02:00
Mark Qvist 8c167b8f3d Updated tests 2022-06-09 13:32:32 +02:00
Mark Qvist bd933dc1df Updated gitignore 2022-06-09 13:30:19 +02:00
Mark Qvist 76f12b4854 Updated gitignore 2022-06-09 10:33:30 +02:00
Mark Qvist 30af212217 Added tests for Link 2022-06-09 10:33:13 +02:00
Mark Qvist 6c22ccc6d4 Updated makefile 2022-06-09 10:31:48 +02:00
Mark Qvist 26dae3830e Fixed unclosed socket in AutoInterface 2022-06-09 08:48:55 +02:00
Mark Qvist a776d59f03 Updated hashes tests 2022-06-08 23:32:56 +02:00
Mark Qvist 5b20caf759 Added tests for Identity 2022-06-08 23:28:55 +02:00
Mark Qvist a800ce43f3 Tests cleanup 2022-06-08 22:27:26 +02:00
Mark Qvist 7916b8e7f4 Automatic switch to internal backend on missing PyCA module 2022-06-08 21:25:46 +02:00
Mark Qvist 60e3c7348a Updated readme 2022-06-08 21:05:03 +02:00
Mark Qvist cc9970c83e Added tests for hashes 2022-06-08 21:04:29 +02:00
Mark Qvist c46b98f163 Added python-only fallback for SHA-256 and SHA-512 2022-06-08 21:03:58 +02:00
Mark Qvist 86061f9f47 Cleanup 2022-06-08 19:47:51 +02:00
Mark Qvist e0b795b4d0 Added internal python-only implementation of Ed25519 2022-06-08 19:47:09 +02:00
Mark Qvist 34efbc6100 Cleanup 2022-06-08 17:05:15 +02:00
Mark Qvist 94edc8eff3 Implemented proxies to pyca X25519 2022-06-08 17:03:40 +02:00
Mark Qvist e2aeb56c12 Renamed file 2022-06-08 15:54:48 +02:00
Mark Qvist 9a4325ce8e Constant time X25519 exchange 2022-06-08 15:52:37 +02:00
Mark Qvist 06fffe5a94 Use internal implementation for X25519 key exchanges 2022-06-08 13:36:23 +02:00
Mark Qvist 7a596882a8 Cleanup 2022-06-08 12:52:42 +02:00
Mark Qvist 76f86f782a Moved Destination Fernet to internal implementation 2022-06-08 12:37:24 +02:00
Mark Qvist 4bd5f05e0e Moved Link Fernet to internal implementation 2022-06-08 12:34:31 +02:00
Mark Qvist 5d3a0efc89 Moved Identity Fernet to internal implementation 2022-06-08 12:29:51 +02:00
Mark Qvist d1a461a2b3 Added multi-backend abstraction for AES-128 CBC primitive 2022-06-08 12:21:50 +02:00
Mark Qvist 0b1e7df31a Added internal Fernet implementation 2022-06-07 17:38:57 +02:00
Mark Qvist 301661c29e Set SHA-256 as default hash for HMAC 2022-06-07 17:33:08 +02:00
Mark Qvist b2b6708e8f Added python-only implementation of PKCS7 padding 2022-06-07 17:32:22 +02:00
Mark Qvist 19a033db96 Freed RNS from dependency on PyCA HMAC, HKDF and hashes 2022-06-07 15:48:23 +02:00
Mark Qvist 5bb510b589 Added internal python-only HKDF 2022-06-07 15:26:45 +02:00
Mark Qvist f1dcda82ac Added internal python-only HMAC implementation 2022-06-07 15:25:41 +02:00
Mark Qvist d24f3a490a Added internal abstraction to SHA-256 2022-06-07 15:21:19 +02:00
Mark Qvist 715a84c6f2 Moved hashing to native python3 hashlib 2022-06-07 12:51:41 +02:00
Mark Qvist 379e56b2ce Socket option check for OpenWRT compatibility 2022-06-07 12:40:50 +02:00
Mark Qvist c6df6293b2 Added hardware MTU parameter to interfaces 2022-05-29 15:43:50 +02:00
Mark Qvist d99d31097b Updated manual 2022-05-29 10:14:31 +02:00
Mark Qvist 54488cfeb5 Updated documentation 2022-05-29 10:13:25 +02:00
Mark Qvist d7e38d646e Updated readme 2022-05-29 09:48:53 +02:00
Mark Qvist b9057bee5f Updated readme 2022-05-29 09:48:31 +02:00
Mark Qvist 9bd64834ec Updated readme 2022-05-29 09:47:26 +02:00
Mark Qvist 9e20ba2dac Implemented I2PInterface recovery on I2P router restart 2022-05-28 02:24:01 +02:00
Mark Qvist 49ed335e19 Cleanup 2022-05-26 16:52:28 +02:00
Mark Qvist 85c71b0b7b Updated docs 2022-05-26 16:50:35 +02:00
Mark Qvist 33fac728f8 Improved link stale process and timeout calculations 2022-05-26 16:49:02 +02:00
Mark Qvist 49616a36cf Fixed I2P controller startup when event loop is not immediately ready 2022-05-26 09:54:56 +02:00
Mark Qvist 1e77f85cd4 Fixed rnx version output 2022-05-26 00:03:37 +02:00
Mark Qvist 9e316ab989 Fixed deprecated options in asyncio API for Python 3.10. Fixes #58. 2022-05-25 23:11:01 +02:00
Mark Qvist 94749e0dde Updated default configs 2022-05-25 23:10:05 +02:00
Mark Qvist a6dbc53209 Improved status display for I2P interfaces 2022-05-25 21:44:49 +02:00
Mark Qvist 3af5a8f3ed Improved I2P server tunnel error handling. Fixes #13. 2022-05-25 21:23:52 +02:00
Mark Qvist fb5172ff10 Improved I2P client tunnel error handling 2022-05-25 20:18:06 +02:00
Mark Qvist 24d6de8490 Updated docs 2022-05-25 15:51:20 +02:00
Mark Qvist d3ab0878e0 Improved I2P interface display in rnstatus 2022-05-25 15:50:54 +02:00
Mark Qvist 7848b7e396 Fixed invalid reference in rnx 2022-05-25 15:08:45 +02:00
Mark Qvist fc80dd2614 Improved rnstatus output 2022-05-25 14:21:04 +02:00
Mark Qvist e00a758b2a Updated readme 2022-05-24 21:00:46 +02:00
Mark Qvist d44ec745df Updated readme 2022-05-24 21:00:25 +02:00
Mark Qvist 7573ac1970 Updated readme 2022-05-24 20:58:58 +02:00
Mark Qvist 88390f0cbc Updated readme 2022-05-24 20:57:56 +02:00
Mark Qvist 3b8490ae9c Added rnx util to documentation 2022-05-24 20:47:45 +02:00
Mark Qvist 417ac9f8da Added rnx remote command utility 2022-05-24 20:14:43 +02:00
Mark Qvist fe5e74bc2b Improved rncp arguments 2022-05-24 20:13:54 +02:00
Mark Qvist 30f71857ae Added docstrings. Added request size to receipts. Fixed link stale time calculation on newly created links with no actual activity. 2022-05-24 20:13:11 +02:00
Mark Qvist c24233845e Implemented bandwidth cap for recursive path requests 2022-05-23 19:49:48 +02:00
Mark Qvist c0fbde5ad1 Added recursive path request loop avoidance 2022-05-23 18:14:45 +02:00
Mark Qvist 5da66402dd Fixed rncp output 2022-05-23 09:23:37 +02:00
Mark Qvist 3bf5694238 Fixed naming conflict in resource advertisements 2022-05-23 08:54:07 +02:00
Mark Qvist 9e6a5d5d91 Fix announce rate targets on I2PInterface peers 2022-05-23 00:28:06 +02:00
Mark Qvist cf3e47f469 Fixed interface mode inheritance 2022-05-23 00:06:26 +02:00
Mark Qvist f8db5a545d Fixed interface mode check 2022-05-23 00:00:14 +02:00
Mark Qvist a79f6e7efa Added rncp utility 2022-05-22 23:44:32 +02:00
Mark Qvist ac4606bcf7 Updated docs 2022-05-22 23:44:12 +02:00
Mark Qvist d1cb07356c Fixed missing recursive progress callback allocation in segmented resource transfer 2022-05-22 21:05:07 +02:00
Mark Qvist e811d54d0f Fixed bug in conditional resource acceptance callback 2022-05-22 19:09:44 +02:00
Mark Qvist 49c8ada478 Added standard identity storage folder 2022-05-22 19:09:16 +02:00
Mark Qvist 6ea7d78b31 Updated API reference 2022-05-22 19:08:32 +02:00
Mark Qvist 0ace84367b Improved link authentication callback 2022-05-22 19:08:03 +02:00
Mark Qvist e63e6821e0 Updated Destination docstrings 2022-05-22 17:11:30 +02:00
Mark Qvist 109132e09d Fixed expired AP and roaming paths not being removed at correct time. 2022-05-22 15:43:46 +02:00
Mark Qvist efd24ec134 Updated documentation 2022-05-22 15:18:09 +02:00
Mark Qvist eefa37f808 Updated documentation 2022-05-22 15:17:54 +02:00
Mark Qvist e4871f7667 Updated documentation 2022-05-22 15:17:25 +02:00
Mark Qvist 44ba5624bc Added gateway mode to rnstatus 2022-05-22 15:16:58 +02:00
Mark Qvist e9c5e3c189 Version bump 2022-05-22 14:29:29 +02:00
Mark Qvist f3ff71d9b8 Implemented unknown path discovery 2022-05-22 14:18:58 +02:00
Mark Qvist 81b92ffdc1 Added gateway interface mode 2022-05-22 11:14:33 +02:00
Mark Qvist 02bb9068cc Updated readme 2022-05-22 11:13:54 +02:00
Mark Qvist ecc9e84bc2 Fixed typo 2022-05-18 00:47:29 +02:00
202 changed files with 33802 additions and 6057 deletions

11
.github/ISSUE_TEMPLATE/config.yml vendored Normal file
View File

@@ -0,0 +1,11 @@
blank_issues_enabled: false
contact_links:
- name: ✨ Feature Request or Idea
url: https://github.com/markqvist/Reticulum/discussions/new?category=ideas
about: Propose and discuss features and ideas
- name: 💬 Questions, Help & Discussion
about: Ask anything, or get help
url: https://github.com/markqvist/Reticulum/discussions/new/choose
- name: 📖 Read the Reticulum Manual
url: https://markqvist.github.io/Reticulum/manual/
about: The complete documentation for Reticulum

View File

@@ -0,0 +1,35 @@
---
name: "\U0001F41B Bug Report"
about: Report a reproducible bug
title: ''
labels: ''
assignees: ''
---
**Read the Contribution Guidelines**
Before creating a bug report on this issue tracker, you **must** read the [Contribution Guidelines](https://github.com/markqvist/Reticulum/blob/master/Contributing.md). Issues that do not follow the contribution guidelines **will be deleted without comment**.
- The issue tracker is used by developers of this project. **Do not use it to ask general questions, or for support requests**.
- Ideas and feature requests can be made on the [Discussions](https://github.com/markqvist/Reticulum/discussions). **Only** feature requests accepted by maintainers and developers are tracked and included on the issue tracker. **Do not post feature requests here**.
- After reading the [Contribution Guidelines](https://github.com/markqvist/Reticulum/blob/master/Contributing.md), delete this section from your bug report.
**Describe the Bug**
A clear and concise description of what the bug is.
**To Reproduce**
Describe in detail how to reproduce the bug.
**Expected Behavior**
A clear and concise description of what you expected to happen.
**Logs & Screenshots**
Please include any relevant log output. If applicable, also add screenshots to help explain your problem.
**System Information**
- OS and version
- Python version
- Program version
**Additional context**
Add any other context about the problem here.

7
.gitignore vendored
View File

@@ -7,4 +7,9 @@ RNS/Utilities/RNS
build
dist
docs/build
rns*.egg-info
rns*.egg-info
profile.data
tests/rnsconfig/storage
tests/rnsconfig/logfile*
*.data
*.result

1167
Changelog.md Normal file

File diff suppressed because it is too large

43
Contributing.md Normal file
View File

@@ -0,0 +1,43 @@
# Contributing to Reticulum
Welcome, and thank you for your interest in contributing to Reticulum!
Apart from writing code, there are many ways in which you can contribute. Before interacting with this community, read these short and simple guidelines.
## Expected Conduct
First and foremost, there is one simple requirement for taking part in this community: While we primarily interact virtually, your actions matter and have real consequences. Therefore: **Act like a responsible, civilized person** - also in the face of disputes and heated disagreements. Speak your mind here, discussions are welcome. Just do so in the spirit of being face-to-face with everyone else. Thank you.
## Asking Questions
If you want to ask a question, **do not open an issue**. The issue tracker is used by people *working on Reticulum* to track bugs, issues and improvements.
Instead, ask away on the [discussions](https://github.com/markqvist/Reticulum/discussions) or on the [Reticulum Matrix channel](https://matrix.to/#/#reticulum:matrix.org) at `#reticulum:matrix.org`
## Providing Feedback & Ideas
Likewise, feedback, ideas and feature requests are a very welcome way to contribute, and should also be posted on the [discussions](https://github.com/markqvist/Reticulum/discussions), or on the [Reticulum Matrix channel](https://matrix.to/#/#reticulum:matrix.org) at `#reticulum:matrix.org`.
Please do not post feature requests or general ideas on the issue tracker, or in direct messages to the primary developers. You are much more likely to get a response and start a constructive discussion by posting your ideas in the public channels created for these purposes.
## Reporting Issues
If you have found a bug or issue in this project, please report it using the [issue tracker](https://github.com/markqvist/Reticulum/issues). If at all possible, be sure to include details on how to reproduce the bug.
Anything submitted to the issue tracker that does not follow these guidelines will be closed and removed without comments or explanation.
## Writing Code
If you are interested in contributing code, fixing open issues or adding features, please coordinate the effort with the maintainer or one of the main developers **before** submitting a pull request. Before deciding to contribute, it is also a good idea to ensure your efforts are in alignment with the [Roadmap](./Roadmap.md) and current development focus.
Pull requests have a high chance of being accepted if they are:
- In alignment with the [Roadmap](./Roadmap.md) or solve an open issue or feature request
- Sufficiently tested to work with all API functions, and pass the standard test suite
- Functionally and conceptually complete and well-designed
- Not simply formatting or code style changes
- Well-documented
Even new ideas and proposals that have not been approved by a maintainer, or fall outside the established roadmap, are *occasionally* accepted - if they possess the remaining qualities listed above. If not, they will be closed and removed without comments or explanation.
By contributing code to this project, you agree that copyright for the code is transferred to the Reticulum maintainers and that the code is irrevocably placed under the [MIT license](./LICENSE).

View File

@@ -32,7 +32,7 @@ def program_setup(configpath):
# Destinations are endpoints in Reticulum, that can be addressed
# and communicated with. Destinations can also announce their
# existence, which will let the network know they are reachable
# and autoomatically create paths to them, from anywhere else
# and automatically create paths to them, from anywhere else
# in the network.
destination_1 = RNS.Destination(
identity,
@@ -53,7 +53,7 @@ def program_setup(configpath):
)
# We configure the destinations to automatically prove all
# packets adressed to it. By doing this, RNS will automatically
# packets addressed to it. By doing this, RNS will automatically
# generate a proof for each incoming packet and transmit it
# back to the sender of that packet. This will let anyone that
# tries to communicate with the destination know whether their
@@ -130,10 +130,11 @@ class ExampleAnnounceHandler:
RNS.prettyhexrep(destination_hash)
)
RNS.log(
"The announce contained the following app data: "+
app_data.decode("utf-8")
)
if app_data:
RNS.log(
"The announce contained the following app data: "+
app_data.decode("utf-8")
)
##########################################################
#### Program Startup #####################################
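
The hunk above makes the example's announce handler tolerate announces that carry no app data. For reference, a minimal, self-contained sketch of the same pattern is shown below; the handler class name, the aspect filter value and the entry-point block are illustrative assumptions, not part of this diff.

import RNS

class SafeAnnounceHandler:
    # Only announces whose aspect matches this filter reach the handler.
    # The aspect value here is an illustrative assumption.
    aspect_filter = "example_utilities"

    def received_announce(self, destination_hash, announced_identity, app_data):
        RNS.log("Received an announce from " + RNS.prettyhexrep(destination_hash))
        # Guard against announces without app data, as the change above does
        if app_data:
            RNS.log("The announce contained the following app data: " + app_data.decode("utf-8"))

if __name__ == "__main__":
    reticulum = RNS.Reticulum()  # start or attach to a Reticulum instance
    RNS.Transport.register_announce_handler(SafeAnnounceHandler())
    input("Listening for announces, hit enter to quit...")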

323
Examples/Buffer.py Normal file
View File

@@ -0,0 +1,323 @@
##########################################################
# This RNS example demonstrates how to set up a link to #
# a destination, and pass binary data over it using a #
# channel buffer. #
##########################################################
from __future__ import annotations
import os
import sys
import time
import argparse
from datetime import datetime
import RNS
from RNS.vendor import umsgpack
# Let's define an app name. We'll use this for all
# destinations we create. Since this echo example
# is part of a range of example utilities, we'll put
# them all within the app namespace "example_utilities"
APP_NAME = "example_utilities"
##########################################################
#### Server Part #########################################
##########################################################
# A reference to the latest client link that connected
latest_client_link = None
# A reference to the latest buffer object
latest_buffer = None
# This initialisation is executed when the user chooses
# to run as a server
def server(configpath):
# We must first initialise Reticulum
reticulum = RNS.Reticulum(configpath)
# Randomly create a new identity for our example
server_identity = RNS.Identity()
# We create a destination that clients can connect to. We
# want clients to create links to this destination, so we
# need to create a "single" destination type.
server_destination = RNS.Destination(
server_identity,
RNS.Destination.IN,
RNS.Destination.SINGLE,
APP_NAME,
"bufferexample"
)
# We configure a function that will get called every time
# a new client creates a link to this destination.
server_destination.set_link_established_callback(client_connected)
# Everything's ready!
# Let's wait for client requests or user input
server_loop(server_destination)
def server_loop(destination):
# Let the user know that everything is ready
RNS.log(
"Link buffer example "+
RNS.prettyhexrep(destination.hash)+
" running, waiting for a connection."
)
RNS.log("Hit enter to manually send an announce (Ctrl-C to quit)")
# We enter a loop that runs until the user exits.
# If the user hits enter, we will announce our server
# destination on the network, which will let clients
# know how to create messages directed towards it.
while True:
entered = input()
destination.announce()
RNS.log("Sent announce from "+RNS.prettyhexrep(destination.hash))
# When a client establishes a link to our server
# destination, this function will be called with
# a reference to the link.
def client_connected(link):
global latest_client_link, latest_buffer
latest_client_link = link
RNS.log("Client connected")
link.set_link_closed_callback(client_disconnected)
# If a new connection is received, the old reader
# needs to be disconnected.
if latest_buffer:
latest_buffer.close()
# Create buffer objects.
# The stream_id parameter to these functions is
# a bit like a file descriptor, except that it
# is unique to the *receiver*.
#
# In this example, both the reader and the writer
# use stream_id = 0, but there are actually two
# separate unidirectional streams flowing in
# opposite directions.
#
channel = link.get_channel()
latest_buffer = RNS.Buffer.create_bidirectional_buffer(0, 0, channel, server_buffer_ready)
def client_disconnected(link):
RNS.log("Client disconnected")
def server_buffer_ready(ready_bytes: int):
"""
Callback from buffer when buffer has data available
:param ready_bytes: The number of bytes ready to read
"""
global latest_buffer
data = latest_buffer.read(ready_bytes)
data = data.decode("utf-8")
RNS.log("Received data over the buffer: " + data)
reply_message = "I received \""+data+"\" over the buffer"
reply_message = reply_message.encode("utf-8")
latest_buffer.write(reply_message)
latest_buffer.flush()
##########################################################
#### Client Part #########################################
##########################################################
# A reference to the server link
server_link = None
# A reference to the buffer object, needed to share the
# object from the link connected callback to the client
# loop.
buffer = None
# This initialisation is executed when the user chooses
# to run as a client
def client(destination_hexhash, configpath):
# We need a binary representation of the destination
# hash that was entered on the command line
try:
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
if len(destination_hexhash) != dest_len:
raise ValueError(
"Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
)
destination_hash = bytes.fromhex(destination_hexhash)
except:
RNS.log("Invalid destination entered. Check your input!\n")
exit()
# We must first initialise Reticulum
reticulum = RNS.Reticulum(configpath)
# Check if we know a path to the destination
if not RNS.Transport.has_path(destination_hash):
RNS.log("Destination is not yet known. Requesting path and waiting for announce to arrive...")
RNS.Transport.request_path(destination_hash)
while not RNS.Transport.has_path(destination_hash):
time.sleep(0.1)
# Recall the server identity
server_identity = RNS.Identity.recall(destination_hash)
# Inform the user that we'll begin connecting
RNS.log("Establishing link with server...")
# When the server identity is known, we set
# up a destination
server_destination = RNS.Destination(
server_identity,
RNS.Destination.OUT,
RNS.Destination.SINGLE,
APP_NAME,
"bufferexample"
)
# And create a link
link = RNS.Link(server_destination)
# We'll also set up functions to inform the
# user when the link is established or closed
link.set_link_established_callback(link_established)
link.set_link_closed_callback(link_closed)
# Everything is set up, so let's enter a loop
# for the user to interact with the example
client_loop()
def client_loop():
global server_link
# Wait for the link to become active
while not server_link:
time.sleep(0.1)
should_quit = False
while not should_quit:
try:
print("> ", end=" ")
text = input()
# Check if we should quit the example
if text == "quit" or text == "q" or text == "exit":
should_quit = True
server_link.teardown()
else:
# Otherwise, encode the text and write it to the buffer.
text = text.encode("utf-8")
buffer.write(text)
# Flush the buffer to force the data to be sent.
buffer.flush()
except Exception as e:
RNS.log("Error while sending data over the link buffer: "+str(e))
should_quit = True
server_link.teardown()
# This function is called when a link
# has been established with the server
def link_established(link):
# We store a reference to the link
# instance for later use
global server_link, buffer
server_link = link
# Create buffer, see server_client_connected() for
# more detail about setting up the buffer.
channel = link.get_channel()
buffer = RNS.Buffer.create_bidirectional_buffer(0, 0, channel, client_buffer_ready)
# Inform the user that the server is
# connected
RNS.log("Link established with server, enter some text to send, or \"quit\" to quit")
# When a link is closed, we'll inform the
# user, and exit the program
def link_closed(link):
if link.teardown_reason == RNS.Link.TIMEOUT:
RNS.log("The link timed out, exiting now")
elif link.teardown_reason == RNS.Link.DESTINATION_CLOSED:
RNS.log("The link was closed by the server, exiting now")
else:
RNS.log("Link closed, exiting now")
RNS.Reticulum.exit_handler()
time.sleep(1.5)
os._exit(0)
# When the buffer has new data, read it and write it to the terminal.
def client_buffer_ready(ready_bytes: int):
global buffer
data = buffer.read(ready_bytes)
RNS.log("Received data over the link buffer: " + data.decode("utf-8"))
print("> ", end=" ")
sys.stdout.flush()
##########################################################
#### Program Startup #####################################
##########################################################
# This part of the program runs at startup,
# and parses input from the user, and then
# starts up the desired program mode.
if __name__ == "__main__":
try:
parser = argparse.ArgumentParser(description="Simple buffer example")
parser.add_argument(
"-s",
"--server",
action="store_true",
help="wait for incoming link requests from clients"
)
parser.add_argument(
"--config",
action="store",
default=None,
help="path to alternative Reticulum config directory",
type=str
)
parser.add_argument(
"destination",
nargs="?",
default=None,
help="hexadecimal hash of the server destination",
type=str
)
args = parser.parse_args()
if args.config:
configarg = args.config
else:
configarg = None
if args.server:
server(configarg)
else:
if (args.destination == None):
print("")
parser.print_help()
print("")
else:
client(args.destination, configarg)
except KeyboardInterrupt:
print("")
exit()
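
As a quick orientation for the new Buffer example above: once a link is up, its channel is fetched with link.get_channel(), a bidirectional buffer is created over it, and data moves via read(), write() and flush(). The sketch below condenses that flow into an echo handler; it assumes an already-established RNS.Link, and the attach_echo_buffer()/echo_ready() names are illustrative, not part of the file.

import RNS

buffer = None  # module-level reference, mirroring the example above

def attach_echo_buffer(link):
    # `link` is assumed to be an established RNS.Link, e.g. passed to a
    # link_established or client_connected callback.
    global buffer
    channel = link.get_channel()
    # Receive stream id 0, send stream id 0: two unidirectional streams
    # flowing in opposite directions over the same channel.
    buffer = RNS.Buffer.create_bidirectional_buffer(0, 0, channel, echo_ready)

def echo_ready(ready_bytes):
    # Called by the buffer whenever `ready_bytes` bytes are available to read
    data = buffer.read(ready_bytes)
    buffer.write(b"echo: " + data)  # write the payload straight back
    buffer.flush()                  # force transmission of the buffered reply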

390
Examples/Channel.py Normal file
View File

@@ -0,0 +1,390 @@
##########################################################
# This RNS example demonstrates how to set up a link to #
# a destination, and pass structured messages over it #
# using a channel. #
##########################################################
import os
import sys
import time
import argparse
from datetime import datetime
import RNS
from RNS.vendor import umsgpack
# Let's define an app name. We'll use this for all
# destinations we create. Since this echo example
# is part of a range of example utilities, we'll put
# them all within the app namespace "example_utilities"
APP_NAME = "example_utilities"
##########################################################
#### Shared Objects ######################################
##########################################################
# Channel data must be structured in a subclass of
# MessageBase. This ensures that the channel will be able
# to serialize and deserialize the object and multiplex it
# with other objects. Both ends of a link will need the
# same object definitions to be able to communicate over
# a channel.
#
# Note: The objects we wish to use over the channel must
# be registered with the channel, and each link has a
# different channel instance. See the client_connected
# and link_established functions in this example to see
# how message types are registered.
# Let's make a simple message class called StringMessage
# that will convey a string with a timestamp.
class StringMessage(RNS.MessageBase):
# The MSGTYPE class variable needs to be assigned a
# 2 byte integer value. This identifier allows the
# channel to look up your message's constructor when a
# message arrives over the channel.
#
# MSGTYPE must be unique across all message types we
# register with the channel. MSGTYPEs >= 0xf000 are
# reserved for the system.
MSGTYPE = 0x0101
# The constructor of our object must be callable with
# no arguments. We can have parameters, but they must
# have a default assignment.
#
# This is needed so the channel can create an empty
# version of our message into which the incoming
# message can be unpacked.
def __init__(self, data=None):
self.data = data
self.timestamp = datetime.now()
# Finally, our message needs to implement functions
# the channel can call to pack and unpack our message
# to/from the raw packet payload. We'll use the
# umsgpack package bundled with RNS. We could also use
# the struct package bundled with Python if we wanted
# more control over the structure of the packed bytes.
#
# Also note that packed message objects must fit
# entirely in one packet. The number of bytes
# available for message payloads can be queried from
# the channel using the Channel.MDU property. The
# channel MDU is slightly less than the link MDU due
# to encoding the message header.
# The pack function encodes the message contents into
# a byte stream.
def pack(self) -> bytes:
return umsgpack.packb((self.data, self.timestamp))
# And the unpack function decodes a byte stream into
# the message contents.
def unpack(self, raw):
self.data, self.timestamp = umsgpack.unpackb(raw)
##########################################################
#### Server Part #########################################
##########################################################
# A reference to the latest client link that connected
latest_client_link = None
# This initialisation is executed when the user chooses
# to run as a server
def server(configpath):
# We must first initialise Reticulum
reticulum = RNS.Reticulum(configpath)
# Randomly create a new identity for our link example
server_identity = RNS.Identity()
# We create a destination that clients can connect to. We
# want clients to create links to this destination, so we
# need to create a "single" destination type.
server_destination = RNS.Destination(
server_identity,
RNS.Destination.IN,
RNS.Destination.SINGLE,
APP_NAME,
"channelexample"
)
# We configure a function that will get called every time
# a new client creates a link to this destination.
server_destination.set_link_established_callback(client_connected)
# Everything's ready!
# Let's wait for client requests or user input
server_loop(server_destination)
def server_loop(destination):
# Let the user know that everything is ready
RNS.log(
"Link example "+
RNS.prettyhexrep(destination.hash)+
" running, waiting for a connection."
)
RNS.log("Hit enter to manually send an announce (Ctrl-C to quit)")
# We enter a loop that runs until the user exits.
# If the user hits enter, we will announce our server
# destination on the network, which will let clients
# know how to create messages directed towards it.
while True:
entered = input()
destination.announce()
RNS.log("Sent announce from "+RNS.prettyhexrep(destination.hash))
# When a client establishes a link to our server
# destination, this function will be called with
# a reference to the link.
def client_connected(link):
global latest_client_link
latest_client_link = link
RNS.log("Client connected")
link.set_link_closed_callback(client_disconnected)
# Register message types and add callback to channel
channel = link.get_channel()
channel.register_message_type(StringMessage)
channel.add_message_handler(server_message_received)
def client_disconnected(link):
RNS.log("Client disconnected")
def server_message_received(message):
"""
A message handler
@param message: An instance of a subclass of MessageBase
@return: True if message was handled
"""
global latest_client_link
# When a message is received over any active link,
# the replies will all be directed to the last client
# that connected.
# In a message handler, any deserializable message
# that arrives over the link's channel will be passed
# to all message handlers, unless a preceding handler indicates it
# has handled the message.
#
#
if isinstance(message, StringMessage):
RNS.log("Received data on the link: " + message.data + " (message created at " + str(message.timestamp) + ")")
reply_message = StringMessage("I received \""+message.data+"\" over the link")
latest_client_link.get_channel().send(reply_message)
# Incoming messages are sent to each message
# handler added to the channel, in the order they
# were added.
# If any message handler returns True, the message
# is considered handled and any subsequent
# handlers are skipped.
return True
##########################################################
#### Client Part #########################################
##########################################################
# A reference to the server link
server_link = None
# This initialisation is executed when the user chooses
# to run as a client
def client(destination_hexhash, configpath):
# We need a binary representation of the destination
# hash that was entered on the command line
try:
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
if len(destination_hexhash) != dest_len:
raise ValueError(
"Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
)
destination_hash = bytes.fromhex(destination_hexhash)
except:
RNS.log("Invalid destination entered. Check your input!\n")
exit()
# We must first initialise Reticulum
reticulum = RNS.Reticulum(configpath)
# Check if we know a path to the destination
if not RNS.Transport.has_path(destination_hash):
RNS.log("Destination is not yet known. Requesting path and waiting for announce to arrive...")
RNS.Transport.request_path(destination_hash)
while not RNS.Transport.has_path(destination_hash):
time.sleep(0.1)
# Recall the server identity
server_identity = RNS.Identity.recall(destination_hash)
# Inform the user that we'll begin connecting
RNS.log("Establishing link with server...")
# When the server identity is known, we set
# up a destination
server_destination = RNS.Destination(
server_identity,
RNS.Destination.OUT,
RNS.Destination.SINGLE,
APP_NAME,
"channelexample"
)
# And create a link
link = RNS.Link(server_destination)
# We'll also set up functions to inform the
# user when the link is established or closed
link.set_link_established_callback(link_established)
link.set_link_closed_callback(link_closed)
# Everything is set up, so let's enter a loop
# for the user to interact with the example
client_loop()
def client_loop():
global server_link
# Wait for the link to become active
while not server_link:
time.sleep(0.1)
should_quit = False
while not should_quit:
try:
print("> ", end=" ")
text = input()
# Check if we should quit the example
if text == "quit" or text == "q" or text == "exit":
should_quit = True
server_link.teardown()
# If not, send the entered text over the link
if text != "":
message = StringMessage(text)
packed_size = len(message.pack())
channel = server_link.get_channel()
if channel.is_ready_to_send():
if packed_size <= channel.MDU:
channel.send(message)
else:
RNS.log(
"Cannot send this packet, the data size of "+
str(packed_size)+" bytes exceeds the link packet MDU of "+
str(channel.MDU)+" bytes",
RNS.LOG_ERROR
)
else:
RNS.log("Channel is not ready to send, please wait for " +
"pending messages to complete.", RNS.LOG_ERROR)
except Exception as e:
RNS.log("Error while sending data over the link: "+str(e))
should_quit = True
server_link.teardown()
# This function is called when a link
# has been established with the server
def link_established(link):
# We store a reference to the link
# instance for later use
global server_link
server_link = link
# Register messages and add handler to channel
channel = link.get_channel()
channel.register_message_type(StringMessage)
channel.add_message_handler(client_message_received)
# Inform the user that the server is
# connected
RNS.log("Link established with server, enter some text to send, or \"quit\" to quit")
# When a link is closed, we'll inform the
# user, and exit the program
def link_closed(link):
if link.teardown_reason == RNS.Link.TIMEOUT:
RNS.log("The link timed out, exiting now")
elif link.teardown_reason == RNS.Link.DESTINATION_CLOSED:
RNS.log("The link was closed by the server, exiting now")
else:
RNS.log("Link closed, exiting now")
RNS.Reticulum.exit_handler()
time.sleep(1.5)
os._exit(0)
# When a packet is received over the channel, we
# simply print out the data.
def client_message_received(message):
if isinstance(message, StringMessage):
RNS.log("Received data on the link: " + message.data + " (message created at " + str(message.timestamp) + ")")
print("> ", end=" ")
sys.stdout.flush()
##########################################################
#### Program Startup #####################################
##########################################################
# This part of the program runs at startup,
# and parses input from the user, and then
# starts up the desired program mode.
if __name__ == "__main__":
try:
parser = argparse.ArgumentParser(description="Simple channel example")
parser.add_argument(
"-s",
"--server",
action="store_true",
help="wait for incoming link requests from clients"
)
parser.add_argument(
"--config",
action="store",
default=None,
help="path to alternative Reticulum config directory",
type=str
)
parser.add_argument(
"destination",
nargs="?",
default=None,
help="hexadecimal hash of the server destination",
type=str
)
args = parser.parse_args()
if args.config:
configarg = args.config
else:
configarg = None
if args.server:
server(configarg)
else:
if (args.destination == None):
print("")
parser.print_help()
print("")
else:
client(args.destination, configarg)
except KeyboardInterrupt:
print("")
exit()


@ -5,6 +5,7 @@
# of the packet. #
##########################################################
import os
import argparse
import RNS
@ -27,8 +28,19 @@ def server(configpath):
# We must first initialise Reticulum
reticulum = RNS.Reticulum(configpath)
# Randomly create a new identity for our echo server
server_identity = RNS.Identity()
    # Load identity from file if it exists, or randomly create one
if configpath:
        ifilepath = "%s/storage/identities/%s" % (configpath, APP_NAME)
else:
ifilepath = "%s/storage/identities/%s" % (RNS.Reticulum.configdir,APP_NAME)
if os.path.exists(ifilepath):
# Load identity from file
server_identity = RNS.Identity.from_file(ifilepath)
RNS.log("loaded identity from file: "+ifilepath, RNS.LOG_VERBOSE)
else:
# Randomly create a new identity for our echo example
server_identity = RNS.Identity()
RNS.log("created new identity", RNS.LOG_VERBOSE)
# We create a destination that clients can query. We want
# to be able to verify echo replies to our clients, so we
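One thing the added branch above does not do is write a newly created identity back to `ifilepath`, so the file it checks for will only exist if it was created by other means. Purely as a hypothetical follow-up sketch (not part of this changeset), the fresh identity could be persisted with `RNS.Identity.to_file`, assuming the storage directory is writable:

```python
# Hypothetical follow-up (not in the diff above): persist the newly
# created identity so later runs can reload it with Identity.from_file().
import os
os.makedirs(os.path.dirname(ifilepath), exist_ok=True)
server_identity.to_file(ifilepath)
RNS.log("saved new identity to file: "+ifilepath, RNS.LOG_VERBOSE)
```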
@ -46,7 +58,7 @@ def server(configpath):
)
# We configure the destination to automatically prove all
# packets adressed to it. By doing this, RNS will automatically
# packets addressed to it. By doing this, RNS will automatically
# generate a proof for each incoming packet and transmit it
# back to the sender of that packet.
echo_destination.set_proof_strategy(RNS.Destination.PROVE_ALL)
@ -120,14 +132,16 @@ def client(destination_hexhash, configpath, timeout=None):
# We need a binary representation of the destination
# hash that was entered on the command line
try:
if len(destination_hexhash) != 20:
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
if len(destination_hexhash) != dest_len:
raise ValueError(
"Destination length is invalid, must be 20 hexadecimal characters (10 bytes)"
"Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
)
destination_hash = bytes.fromhex(destination_hexhash)
except:
RNS.log("Invalid destination entered. Check your input!\n")
except Exception as e:
RNS.log("Invalid destination entered. Check your input!")
RNS.log(str(e)+"\n")
exit()
# We must first initialise Reticulum
@ -208,6 +222,7 @@ def client(destination_hexhash, configpath, timeout=None):
# If we do not know this destination, tell the
# user to wait for an announce to arrive.
RNS.log("Destination is not yet known. Requesting path...")
RNS.log("Hit enter to manually retry once an announce is received.")
RNS.Transport.request_path(destination_hash)
# This function is called when our reply destination
@ -325,4 +340,4 @@ if __name__ == "__main__":
client(args.destination, configarg, timeout=timeoutarg)
except KeyboardInterrupt:
print("")
exit()
exit()


@ -215,8 +215,12 @@ def client(destination_hexhash, configpath):
# We need a binary representation of the destination
# hash that was entered on the command line
try:
if len(destination_hexhash) != 20:
raise ValueError("Destination length is invalid, must be 20 hexadecimal characters (10 bytes)")
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
if len(destination_hexhash) != dest_len:
raise ValueError(
"Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
)
destination_hash = bytes.fromhex(destination_hexhash)
except:
RNS.log("Invalid destination entered. Check your input!\n")
@ -445,8 +449,7 @@ def link_established(link):
# And set up a small job to check for
# a potential timeout in receiving the
# file list
thread = threading.Thread(target=filelist_timeout_job)
thread.setDaemon(True)
thread = threading.Thread(target=filelist_timeout_job, daemon=True)
thread.start()
# This job just sleeps for the specified


@ -84,7 +84,7 @@ def client_connected(link):
def client_disconnected(link):
RNS.log("Client disconnected")
def remote_identified(identity):
def remote_identified(link, identity):
RNS.log("Remote identified as: "+str(identity))
def server_packet_received(message, packet):
@ -124,8 +124,12 @@ def client(destination_hexhash, configpath):
# We need a binary representation of the destination
# hash that was entered on the command line
try:
if len(destination_hexhash) != 20:
raise ValueError("Destination length is invalid, must be 20 hexadecimal characters (10 bytes)")
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
if len(destination_hexhash) != dest_len:
raise ValueError(
"Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
)
destination_hash = bytes.fromhex(destination_hexhash)
except:
RNS.log("Invalid destination entered. Check your input!\n")


@ -28,8 +28,20 @@ def server(configpath):
# We must first initialise Reticulum
reticulum = RNS.Reticulum(configpath)
# Randomly create a new identity for our link example
server_identity = RNS.Identity()
    # Load identity from file if it exists, or randomly create one
if configpath:
        ifilepath = "%s/storage/identities/%s" % (configpath, APP_NAME)
else:
ifilepath = "%s/storage/identities/%s" % (RNS.Reticulum.configdir,APP_NAME)
RNS.log("ifilepath: %s" % ifilepath)
if os.path.exists(ifilepath):
# Load identity from file
server_identity = RNS.Identity.from_file(ifilepath)
RNS.log("loaded identity from file: "+ifilepath, RNS.LOG_VERBOSE)
else:
# Randomly create a new identity for our link example
server_identity = RNS.Identity()
RNS.log("created new identity", RNS.LOG_VERBOSE)
# We create a destination that clients can connect to. We
# want clients to create links to this destination, so we
@ -110,8 +122,12 @@ def client(destination_hexhash, configpath):
# We need a binary representation of the destination
# hash that was entered on the command line
try:
if len(destination_hexhash) != 20:
raise ValueError("Destination length is invalid, must be 20 hexadecimal characters (10 bytes)")
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
if len(destination_hexhash) != dest_len:
raise ValueError(
"Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
)
destination_hash = bytes.fromhex(destination_hexhash)
except:
RNS.log("Invalid destination entered. Check your input!\n")
@ -284,4 +300,4 @@ if __name__ == "__main__":
except KeyboardInterrupt:
print("")
exit()
exit()


@ -25,7 +25,7 @@ def program_setup(configpath):
# Destinations are endpoints in Reticulum, that can be addressed
# and communicated with. Destinations can also announce their
# existence, which will let the network know they are reachable
# and autoomatically create paths to them, from anywhere else
# and automatically create paths to them, from anywhere else
# in the network.
destination = RNS.Destination(
identity,
@ -36,7 +36,7 @@ def program_setup(configpath):
)
# We configure the destination to automatically prove all
# packets adressed to it. By doing this, RNS will automatically
# packets addressed to it. By doing this, RNS will automatically
# generate a proof for each incoming packet and transmit it
# back to the sender of that packet. This will let anyone that
# tries to communicate with the destination know whether their


@ -23,8 +23,8 @@ APP_NAME = "example_utilities"
# A reference to the latest client link that connected
latest_client_link = None
def random_text_generator(path, data, request_id, remote_identity, requested_at):
RNS.log("Generating response to request "+RNS.prettyhexrep(request_id))
def random_text_generator(path, data, request_id, link_id, remote_identity, requested_at):
RNS.log("Generating response to request "+RNS.prettyhexrep(request_id)+" on link "+RNS.prettyhexrep(link_id))
texts = ["They looked up", "On each full moon", "Becky was upset", "Ill stay away from it", "The pet shop stocks everything"]
return texts[random.randint(0, len(texts)-1)]
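Not shown in this hunk, but useful context for the signature change above: a response generator like `random_text_generator` is registered on a server destination and then queried over an established link. The following is only an illustrative sketch of that wiring; the `/random/text` path and the callback names are assumptions, not code from this diff.

```python
# Illustrative sketch only: registering and querying a request handler
# with the (new) six-argument response generator shown above.
def setup_request_handler(destination):
    destination.register_request_handler(
        "/random/text",
        response_generator = random_text_generator,
        allow = RNS.Destination.ALLOW_ALL
    )

def query_random_text(link):
    # Issue a request over an already established link
    link.request(
        "/random/text",
        data = None,
        response_callback = got_response,
        failed_callback = request_failed
    )

def got_response(request_receipt):
    RNS.log("Response received: " + str(request_receipt.response))

def request_failed(request_receipt):
    RNS.log("The request failed.", RNS.LOG_ERROR)
```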
@ -110,8 +110,12 @@ def client(destination_hexhash, configpath):
# We need a binary representation of the destination
# hash that was entered on the command line
try:
if len(destination_hexhash) != 20:
raise ValueError("Destination length is invalid, must be 20 hexadecimal characters (10 bytes)")
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
if len(destination_hexhash) != dest_len:
raise ValueError(
"Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
)
destination_hash = bytes.fromhex(destination_hexhash)
except:
RNS.log("Invalid destination entered. Check your input!\n")


@ -166,8 +166,12 @@ def client(destination_hexhash, configpath):
# We need a binary representation of the destination
# hash that was entered on the command line
try:
if len(destination_hexhash) != 20:
raise ValueError("Destination length is invalid, must be 20 hexadecimal characters (10 bytes)")
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
if len(destination_hexhash) != dest_len:
raise ValueError(
"Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2)
)
destination_hash = bytes.fromhex(destination_hexhash)
except:
RNS.log("Invalid destination entered. Check your input!\n")

FUNDING.yml (new file)

@ -0,0 +1,2 @@
ko_fi: markqvist
custom: "https://unsigned.io/donate"


@ -1,6 +1,6 @@
MIT License, unless otherwise noted
Copyright (c) 2016-2022 Mark Qvist / unsigned.io
Copyright (c) 2016-2024 Mark Qvist / unsigned.io
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@ -1,9 +1,28 @@
all: release
test:
@echo Running tests...
python -m tests.all
clean:
@echo Cleaning...
-rm -r ./build
-rm -r ./dist
@-rm -rf ./build
@-rm -rf ./dist
@-rm -rf ./*.data
@-rm -rf ./__pycache__
@-rm -rf ./RNS/__pycache__
@-rm -rf ./RNS/Cryptography/__pycache__
@-rm -rf ./RNS/Cryptography/aes/__pycache__
@-rm -rf ./RNS/Cryptography/pure25519/__pycache__
@-rm -rf ./RNS/Interfaces/__pycache__
@-rm -rf ./RNS/Utilities/__pycache__
@-rm -rf ./RNS/vendor/__pycache__
@-rm -rf ./RNS/vendor/i2plib/__pycache__
@-rm -rf ./tests/__pycache__
@-rm -rf ./tests/rnsconfig/storage
@-rm -rf ./*.egg-info
@make -C docs clean
@echo Done
remove_symlinks:
@echo Removing symlinks for build...
@ -15,11 +34,28 @@ create_symlinks:
-ln -s ../RNS ./Examples/
-ln -s ../../RNS ./RNS/Utilities/
build_sdist_only:
python3 setup.py sdist
build_wheel:
python3 setup.py sdist bdist_wheel
release: remove_symlinks build_wheel create_symlinks
build_pure_wheel:
python3 setup.py sdist bdist_wheel --pure
documentation:
make -C docs html
manual:
make -C docs latexpdf epub
release: test remove_symlinks build_wheel build_pure_wheel documentation manual create_symlinks
debug: remove_symlinks build_wheel build_pure_wheel create_symlinks
upload:
@echo Ready to publish release, hit enter to continue
@read VOID
@echo Uploading to PyPi...
twine upload dist/*
@echo Release published

README.md

@ -1,32 +1,51 @@
Reticulum Network Stack β
Reticulum Network Stack β <img align="right" src="https://static.pepy.tech/personalized-badge/rns?period=total&units=international_system&left_color=grey&right_color=blue&left_text=Installs"/>
==========
<p align="center"><img width="200" src="https://unsigned.io/wp-content/uploads/2022/03/reticulum_logo_512.png"></p>
<p align="center"><img width="200" src="https://raw.githubusercontent.com/markqvist/Reticulum/master/docs/source/graphics/rns_logo_512.png"></p>
Reticulum is the cryptography-based networking stack for wide-area networks built on readily available hardware. It can operate even with very high latency and extremely low bandwidth. Reticulum allows you to build wide-area networks with off-the-shelf tools, and offers end-to-end encryption and connectivity, initiator anonymity, autoconfiguring cryptographically backed multi-hop transport, efficient addressing, unforgeable delivery acknowledgements and more.
Reticulum is the cryptography-based networking stack for building local and wide-area
networks with readily available hardware. It can operate even with very high latency
and extremely low bandwidth. Reticulum allows you to build wide-area networks
with off-the-shelf tools, and offers end-to-end encryption and connectivity,
initiator anonymity, autoconfiguring cryptographically backed multi-hop
transport, efficient addressing, unforgeable delivery acknowledgements and
more.
The vision of Reticulum is to allow anyone to be their own network operator, and to make it cheap and easy to cover vast areas with a myriad of independent, interconnectable and autonomous networks. Reticulum **is not** *one network*, it **is a tool** for building *thousands of networks*. Networks without kill-switches, surveillance, censorship and control. Networks that can freely interoperate, associate and disassociate with each other, and require no central oversight. Networks for human beings. *Networks for the people*.
The vision of Reticulum is to allow anyone to be their own network operator,
and to make it cheap and easy to cover vast areas with a myriad of independent,
inter-connectable and autonomous networks. Reticulum **is not** *one* network.
It is **a tool** for building *thousands of networks*. Networks without
kill-switches, surveillance, censorship and control. Networks that can freely
interoperate, associate and disassociate with each other, and require no
central oversight. Networks for human beings. *Networks for the people*.
Reticulum is a complete networking stack, and does not need IP or higher layers, although it is easy to use IP (with TCP or UDP) as the underlying carrier for Reticulum. It is therefore trivial to tunnel Reticulum over the Internet or private IP networks.
Reticulum is a complete networking stack, and does not rely on IP or higher
layers, but it is possible to use IP as the underlying carrier for Reticulum.
It is therefore trivial to tunnel Reticulum over the Internet or private IP
networks.
Having no dependencies on traditional networking stacks free up overhead that has been utilised to implement a networking stack built directly on cryptographic principles, allowing resilience and stable functionality in open and trustless networks.
Having no dependencies on traditional networking stacks frees up overhead that
has been used to implement a networking stack built directly on cryptographic
principles, allowing resilience and stable functionality, even in open and
trustless networks.
No kernel modules or drivers are required. Reticulum runs completely in userland, and can run on practically any system that runs Python 3.
No kernel modules or drivers are required. Reticulum runs completely in
userland, and can run on practically any system that runs Python 3.
## Read The Manual
The full documentation for Reticulum is available at [markqvist.github.io/Reticulum/manual/](https://markqvist.github.io/Reticulum/manual/).
You can also [download the Reticulum manual as a PDF](https://github.com/markqvist/Reticulum/raw/master/docs/Reticulum%20Manual.pdf)
You can also download the [Reticulum manual as a PDF](https://github.com/markqvist/Reticulum/raw/master/docs/Reticulum%20Manual.pdf) or [as an e-book in EPUB format](https://github.com/markqvist/Reticulum/raw/master/docs/Reticulum%20Manual.epub).
For more info, see [unsigned.io/projects/reticulum](https://unsigned.io/projects/reticulum/)
For more info, see [reticulum.network](https://reticulum.network/)
## Notable Features
- Coordination-less globally unique adressing and identification
- Coordination-less globally unique addressing and identification
- Fully self-configuring multi-hop routing
- Complete initiator anonymity, communicate without revealing your identity
- Initiator anonymity, communicate without revealing your identity
- Asymmetric X25519 encryption and Ed25519 signatures as a basis for all communication
- Forward Secrecy with ephemereal Elliptic Curve Diffie-Hellman keys on Curve25519
- Reticulum uses the [Fernet](https://github.com/fernet/spec/blob/master/Spec.md) specification for on-the-wire / over-the-air encryption
- Forward Secrecy with ephemeral Elliptic Curve Diffie-Hellman keys on Curve25519
- Reticulum uses the following format for encrypted tokens:
  - Keys are ephemeral and derived from an ECDH key exchange on Curve25519
  - AES-128 in CBC mode with PKCS7 padding
  - HMAC using SHA256 for authentication
@ -34,58 +53,130 @@ For more info, see [unsigned.io/projects/reticulum](https://unsigned.io/projects
- Unforgeable packet delivery confirmations
- A variety of supported interface types
- An intuitive and easy-to-use API
- Reliable and efficient transfer of arbritrary amounts of data
- Reliable and efficient transfer of arbitrary amounts of data
- Reticulum can handle a few bytes of data or files of many gigabytes
- Sequencing, transfer coordination and checksumming is automatic
- Sequencing, transfer coordination and checksumming are automatic
- The API is very easy to use, and provides transfer progress
- Lightweight, flexible and expandable Request/Response mechanism
- Efficient link establishment
- Total bandwidth cost of setting up an encrypted link is 3 packets totalling 237 bytes
- Low cost of keeping links open at only 0.62 bits per second
- Total cost of setting up an encrypted and verified link is only 3 packets, totalling 297 bytes
- Low cost of keeping links open at only 0.44 bits per second
- Reliable sequential delivery with Channel and Buffer mechanisms
## Roadmap
While Reticulum is already a fully featured and functional networking stack,
many improvements and additions are actively being worked on, and planned for the future.
To learn more about the direction and future of Reticulum, please see the [Development Roadmap](./Roadmap.md).
## Examples of Reticulum Applications
If you want to quickly get an idea of what Reticulum can do, take a look at the following resources.
If you want to quickly get an idea of what Reticulum can do, take a look at the
following resources.
- [LXMF](https://github.com/markqvist/lxmf) is a distributed, delay and disruption tolerant message transfer protocol built on Reticulum
- You can use the [rnsh](https://github.com/acehoss/rnsh) program to establish remote shell sessions over Reticulum.
- For an off-grid, encrypted and resilient mesh communications platform, see [Nomad Network](https://github.com/markqvist/NomadNet)
- The Android, Linux and macOS app [Sideband](https://unsigned.io/sideband) has a graphical interface and focuses on ease of use.
- The Android, Linux and macOS app [Sideband](https://github.com/markqvist/Sideband) has a graphical interface and focuses on ease of use.
- [LXMF](https://github.com/markqvist/lxmf) is a distributed, delay and disruption tolerant message transfer protocol built on Reticulum
## Where can Reticulum be used?
Over practically any medium that can support at least a half-duplex channel with 500 bits per second throughput, and an MTU of 500 bytes. Data radios, modems, LoRa radios, serial lines, AX.25 TNCs, amateur radio digital modes, ad-hoc WiFi, free-space optical links and similar systems are all examples of the types of interfaces Reticulum was designed for.
Over practically any medium that can support at least a half-duplex channel
with greater throughput than 5 bits per second, and an MTU of 500 bytes. Data radios,
modems, LoRa radios, serial lines, AX.25 TNCs, amateur radio digital modes,
WiFi and Ethernet devices, free-space optical links, and similar systems are
all examples of the types of physical devices Reticulum can use.
An open-source LoRa-based interface called [RNode](https://unsigned.io/projects/rnode/) has been designed specifically for use with Reticulum. It is possible to build yourself, or it can be purchased as a complete transceiver that just needs a USB connection to the host.
An open-source LoRa-based interface called
[RNode](https://markqvist.github.io/Reticulum/manual/hardware.html#rnode) has
been designed specifically for use with Reticulum. It is possible to build
yourself, or it can be purchased as a complete transceiver that just needs a
USB connection to the host.
Reticulum can also be encapsulated over existing IP networks, so there's nothing stopping you from using it over wired ethernet or your local WiFi network, where it'll work just as well. In fact, one of the strengths of Reticulum is how easily it allows you to connect different mediums into a self-configuring, resilient and encrypted mesh.
Reticulum can also be encapsulated over existing IP networks, so there's
nothing stopping you from using it over wired Ethernet, your local WiFi network
or the Internet, where it'll work just as well. In fact, one of the strengths
of Reticulum is how easily it allows you to connect different mediums into a
self-configuring, resilient and encrypted mesh, using any available mixture of
available infrastructure.
As an example, it's possible to set up a Raspberry Pi connected to both a LoRa radio, a packet radio TNC and a WiFi network. Once the interfaces are configured, Reticulum will take care of the rest, and any device on the WiFi network can communicate with nodes on the LoRa and packet radio sides of the network, and vice versa.
As an example, it's possible to set up a Raspberry Pi connected to both a LoRa
radio, a packet radio TNC and a WiFi network. Once the interfaces are
configured, Reticulum will take care of the rest, and any device on the WiFi
network can communicate with nodes on the LoRa and packet radio sides of the
network, and vice versa.
## How do I get started?
The best way to get started with the Reticulum Network Stack depends on what
you want to do. For full details and examples, have a look at the [Getting Started Fast](https://markqvist.github.io/Reticulum/manual/gettingstartedfast.html) section of the [Reticulum Manual](https://markqvist.github.io/Reticulum/manual/).
you want to do. For full details and examples, have a look at the
[Getting Started Fast](https://markqvist.github.io/Reticulum/manual/gettingstartedfast.html)
section of the [Reticulum Manual](https://markqvist.github.io/Reticulum/manual/).
To simply install Reticulum and related utilities on your system, the easiest way is via pip:
To simply install Reticulum and related utilities on your system, the easiest way is via `pip`.
You can then start any program that uses Reticulum, or start Reticulum as a system service with
[the rnsd utility](https://markqvist.github.io/Reticulum/manual/using.html#the-rnsd-utility).
```bash
pip3 install rns
pip install rns
```
You can then start any program that uses Reticulum, or start Reticulum as a system service with [the rnsd utility](https://markqvist.github.io/Reticulum/manual/using.html#the-rnsd-utility).
If you are using an operating system that blocks normal user package installation via `pip`,
you can return `pip` to normal behaviour by editing the `~/.config/pip/pip.conf` file,
and adding the following directive in the `[global]` section:
When first started, Reticulum will create a default configuration file, providing basic connectivity to other Reticulum peers. The default config file contains examples for using Reticulum with LoRa transceivers (specifically [RNode](https://unsigned.io/projects/rnode/)), packet radio TNCs/modems, TCP and UDP.
```text
[global]
break-system-packages = true
```
You can use the examples in the config file to expand communication over many mediums such as packet radio or LoRa (with [RNode](https://unsigned.io/projects/rnode/)), serial ports, or over fast IP links and the Internet using the UDP and TCP interfaces. For more detailed examples, take a look at the [Supported Interfaces](https://markqvist.github.io/Reticulum/manual/interfaces.html) section of the [Reticulum Manual](https://markqvist.github.io/Reticulum/manual/).
Alternatively, you can use the `pipx` tool to install Reticulum in an isolated environment:
## Current Status
Reticulum should currently be considered beta software. All core protocol features are implemented and functioning, but additions will probably occur as real-world use is explored. There will be bugs. The API and wire-format can be considered relatively stable at the moment, but could change if warranted.
```bash
pipx install rns
```
When first started, Reticulum will create a default configuration file,
providing basic connectivity to other Reticulum peers that might be locally
reachable. The default config file contains a few examples, and references for
creating a more complex configuration.
If you have an old version of `pip` on your system, you may need to upgrade it first with `pip install pip --upgrade`. If you do not already have `pip` installed, you can install it using the package manager of your system with `sudo apt install python3-pip` or similar.
For more detailed examples on how to expand communication over many mediums such
as packet radio or LoRa, serial ports, or over fast IP links and the Internet using
the UDP and TCP interfaces, take a look at the [Supported Interfaces](https://markqvist.github.io/Reticulum/manual/interfaces.html)
section of the [Reticulum Manual](https://markqvist.github.io/Reticulum/manual/).
## Included Utilities
Reticulum includes a range of useful utilities for managing your networks,
viewing status and information, and other tasks. You can read more about these
programs in the [Included Utility Programs](https://markqvist.github.io/Reticulum/manual/using.html#included-utility-programs)
section of the [Reticulum Manual](https://markqvist.github.io/Reticulum/manual/).
- The system daemon `rnsd` for running Reticulum as an always-available service
- An interface status utility called `rnstatus`, which displays information about interfaces
- The path lookup and management tool `rnpath`, letting you view and modify path tables
- A diagnostics tool called `rnprobe` for checking connectivity to destinations
- A simple file transfer program called `rncp`, making it easy to transfer files between systems
- The identity management and encryption utility `rnid` lets you manage Identities and encrypt/decrypt files
- The remote command execution program `rnx` lets you run commands and
  programs and retrieve output from remote systems
All tools, including `rnx` and `rncp`, work reliably and well even over very
low-bandwidth links like LoRa or Packet Radio. For full-featured remote shells
over Reticulum, also have a look at the [rnsh](https://github.com/acehoss/rnsh)
program.
## Supported interface types and devices
Reticulum implements a range of generalised interface types that covers most of the communications hardware that Reticulum can run over. If your hardware is not supported, it's relatively simple to implement an interface class. I will gratefully accept pull requests for custom interfaces if they are generally useful.
Reticulum implements a range of generalised interface types that covers most of
the communications hardware that Reticulum can run over. If your hardware is
not supported, it's relatively simple to implement an interface class. I will
gratefully accept pull requests for custom interfaces if they are generally
useful.
Currently, the following interfaces are supported:
- Any ethernet device
- LoRa using [RNode](https://unsigned.io/projects/rnode/)
- Any Ethernet device
- LoRa using [RNode](https://unsigned.io/rnode/)
- Packet Radio TNCs (with or without AX.25)
- KISS-compatible hardware and software modems
- Any device with a serial port
@ -94,60 +185,86 @@ Currently, the following interfaces are supported:
- External programs via stdio or pipes
- Custom hardware via stdio or pipes
## Development Roadmap
- Version 0.3.6
- Improving [the manual](https://markqvist.github.io/Reticulum/manual/) with sections specifically for beginners
- Version 0.3.7
- Support for radio and modem interfaces on Android
- GUI interface configuration tool
- Easy way to share interface configurations, see [#19](https://github.com/markqvist/Reticulum/discussions/19)
- More interface types for even broader compatibility
- Plain ESP32 devices (ESP-Now, WiFi, Bluetooth, etc.)
- More LoRa transceivers
- AT-compatible modems
- IR Transceivers
- AWDL / OWL
- HF Modems
- CAN-bus
- ZeroMQ
- MQTT
- SPI
- i²c
- Planned, but not yet scheduled
- Globally routable multicast
- A portable Reticulum implementation in C, see [#21](https://github.com/markqvist/Reticulum/discussions/21)
## Performance
Reticulum targets a *very* wide usable performance envelope, but prioritises
functionality and performance on low-bandwidth mediums. The goal is to
provide a dynamic performance envelope from 250 bits per second, to 1 gigabit
per second on normal hardware.
Currently, the usable performance envelope is approximately 150 bits per second
to 40 megabits per second, with physical mediums faster than that not being
saturated. Performance beyond the current level is intended for future
upgrades, but not highly prioritised at this point in time.
## Current Status
Reticulum should currently be considered beta software. All core protocol
features are implemented and functioning, but additions will probably occur as
real-world use is explored. There will be bugs. The API and wire-format can be
considered relatively stable at the moment, but could change if warranted.
## Dependencies:
- Python 3.6
- cryptography.io
- netifaces
- pyserial
## Dependencies
The installation of the default `rns` package requires the dependencies listed
below. Almost all systems and distributions have readily available packages for
these dependencies, and when the `rns` package is installed with `pip`, they
will be downloaded and installed as well.
- [PyCA/cryptography](https://github.com/pyca/cryptography)
- [pyserial](https://github.com/pyserial/pyserial)
On more unusual systems, and in some rare cases, it might not be possible to
install or even compile one or more of the above modules. In such situations,
you can use the `rnspure` package instead, which requires no external
dependencies for installation. Please note that the contents of the `rns` and
`rnspure` packages are *identical*. The only difference is that the `rnspure`
package lists no dependencies required for installation.
No matter how Reticulum is installed and started, it will load external
dependencies only if they are *needed* and *available*. If for example you want
to use Reticulum on a system that cannot support
[pyserial](https://github.com/pyserial/pyserial), it is perfectly possible to
do so using the `rnspure` package, but Reticulum will not be able to use
serial-based interfaces. All other available modules will still be loaded when
needed.
**Please Note!** If you use the `rnspure` package to run Reticulum on systems
that do not support [PyCA/cryptography](https://github.com/pyca/cryptography),
it is important that you read and understand the [Cryptographic
Primitives](#cryptographic-primitives) section of this document.
## Public Testnet
If you just want to get started experimenting without building any physical
networks, you are welcome to join the Unsigned.io RNS Testnet. The testnet is
just that, an informal network for testing and experimenting. It will be up
most of the time, and anyone can join, but it also means that there's no
guarantees for service availability.
If you just want to get started experimenting without building any physical networks, you are welcome to join the Unsigned.io RNS Testnet. The testnet is just that, an informal network for testing and experimenting. It will be up most of the time, and anyone can join, but it also means that there's no guarantees for service availability.
The testnet runs the very latest version of Reticulum (often even a short while before it is publicly released). Sometimes experimental versions of Reticulum might be deployed to nodes on the testnet, which means strange behaviour might occur. If none of that scares you, you can join the testnet via eihter TCP or I2P. Just add one of the following interfaces to your Reticulum configuration file:
The testnet runs the very latest version of Reticulum (often even a short while
before it is publicly released). Sometimes experimental versions of Reticulum
might be deployed to nodes on the testnet, which means strange behaviour might
occur. If none of that scares you, you can join the testnet via either TCP or
I2P. Just add one of the following interfaces to your Reticulum configuration
file:
```
# For connecting over TCP/IP:
[[RNS Testnet Frankfurt]]
# TCP/IP interface to the RNS Amsterdam Hub
[[RNS Testnet Amsterdam]]
type = TCPClientInterface
interface_enabled = yes
outgoing = True
target_host = frankfurt.rns.unsigned.io
enabled = yes
target_host = amsterdam.connect.reticulum.network
target_port = 4965
# TCP/IP interface to the BetweenTheBorders Hub (community-provided)
[[RNS Testnet BetweenTheBorders]]
type = TCPClientInterface
enabled = yes
target_host = betweentheborders.com
target_port = 4242
# For connecting over I2P:
[[RNS Testnet I2P Node A]]
# Interface to Testnet I2P Hub
[[RNS Testnet I2P Hub]]
type = I2PInterface
interface_enabled = yes
peers = ykzlw5ujbaqc2xkec4cpvgyxj257wcrmmgkuxqmqcur7cq3w3lha.b32.i2p
enabled = yes
peers = g3br23bvx3lq5uddcsjii74xgmn6y5q325ovrkq2zw2wbzbqgbuq.b32.i2p
```
The testnet also contains a number of [Nomad Network](https://github.com/markqvist/nomadnet) nodes, and LXMF propagation nodes.
@ -155,11 +272,89 @@ The testnet also contains a number of [Nomad Network](https://github.com/markqvi
## Support Reticulum
You can help support the continued development of open, free and private communications systems by donating via one of the following channels:
- Ethereum: 0x81F7B979fEa6134bA9FD5c701b3501A2e61E897a
- Bitcoin: 3CPmacGm34qYvR6XWLVEJmi2aNe3PZqUuq
- Monero:
```
84FpY1QbxHcgdseePYNmhTHcrgMX4nFfBYtz2GKYToqHVVhJp8Eaw1Z1EedRnKD19b3B8NiLCGVxzKV17UMmmeEsCrPyA5w
```
- Ethereum
```
0xFDabC71AC4c0C78C95aDDDe3B4FA19d6273c5E73
```
- Bitcoin
```
35G9uWVzrpJJibzUwpNUQGQNFzLirhrYAH
```
- Ko-Fi: https://ko-fi.com/markqvist
Are certain features in the development roadmap are important to you or your organisation? Make them a reality quickly by sponsoring their implementation.
Are certain features in the development roadmap important to you or your
organisation? Make them a reality quickly by sponsoring their implementation.
## Caveat Emptor
Reticulum is relatively young software, and should be considered as such. While it has been built with cryptography best-practices very foremost in mind, it _has not_ been externally security audited, and there could very well be privacy-breaking bugs. If you want to help out, or help sponsor an audit, please do get in touch.
## Cryptographic Primitives
Reticulum uses a simple suite of efficient, strong and modern cryptographic
primitives, with widely available implementations that can be used both on
general-purpose CPUs and on microcontrollers. The necessary primitives are:
- Ed25519 for signatures
- X25519 for ECDH key exchanges
- HKDF for key derivation
- Modified Fernet for encrypted tokens (see the sketch after this list)
  - AES-128 in CBC mode
  - HMAC for message authentication
  - No Fernet version and timestamp fields
- SHA-256
- SHA-512
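To make the token construction listed above concrete, the snippet below is a minimal encrypt-then-MAC sketch in the same style, written against the PyCA/cryptography package mentioned below. It is an illustration only, not the code in [Fernet.py](RNS/Cryptography/Fernet.py), and it assumes two 16-byte keys that would normally come from the HKDF-based key derivation listed above.

```python
# Illustrative encrypt-then-MAC token in the style described above:
# AES-128-CBC with PKCS7 padding, authenticated with HMAC-SHA256,
# and no version or timestamp fields. Not the actual RNS implementation.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import hashes, hmac, padding

def encrypt_token(plaintext: bytes, encryption_key: bytes, signing_key: bytes) -> bytes:
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(encryption_key), modes.CBC(iv)).encryptor()
    ciphertext = iv + encryptor.update(padded) + encryptor.finalize()
    signer = hmac.HMAC(signing_key, hashes.SHA256())
    signer.update(ciphertext)
    return ciphertext + signer.finalize()
```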
In the default installation configuration, the `X25519`, `Ed25519` and
`AES-128-CBC` primitives are provided by [OpenSSL](https://www.openssl.org/)
(via the [PyCA/cryptography](https://github.com/pyca/cryptography) package).
The hashing functions `SHA-256` and `SHA-512` are provided by the standard
Python [hashlib](https://docs.python.org/3/library/hashlib.html). The `HKDF`,
`HMAC`, `Fernet` primitives, and the `PKCS7` padding function are always
provided by the following internal implementations:
- [HKDF.py](RNS/Cryptography/HKDF.py)
- [HMAC.py](RNS/Cryptography/HMAC.py)
- [Fernet.py](RNS/Cryptography/Fernet.py)
- [PKCS7.py](RNS/Cryptography/PKCS7.py)
Reticulum also includes a complete implementation of all necessary primitives
in pure Python. If OpenSSL & PyCA are not available on the system when
Reticulum is started, Reticulum will instead use the internal pure-python
primitives. A trivial consequence of this is performance, with the OpenSSL
backend being *much* faster. The most important consequence, however, is the
potential loss of security by using primitives that have not seen the same
amount of scrutiny, testing and review as those from OpenSSL.
If you want to use the internal pure-python primitives, it is **highly
advisable** that you have a good understanding of the risks that this poses, and
make an informed decision on whether those risks are acceptable to you.
Reticulum is relatively young software, and should be considered as such. While
it has been built with cryptography best-practices very foremost in mind, it
_has not_ been externally security audited, and there could very well be
privacy or security breaking bugs. If you want to help out, or help sponsor an
audit, please do get in touch.
## Acknowledgements & Credits
Reticulum can only exist because of the mountain of Open Source work it was
built on top of, the contributions of everyone involved, and everyone that has
supported the project through the years. To everyone who has helped, thank you
so much.
A number of other modules and projects are either part of, or used by
Reticulum. Sincere thanks to the authors and contributors of the following
projects:
- [PyCA/cryptography](https://github.com/pyca/cryptography), *BSD License*
- [Pure-25519](https://github.com/warner/python-pure25519) by [Brian Warner](https://github.com/warner), *MIT License*
- [Pysha2](https://github.com/thomdixon/pysha2) by [Thom Dixon](https://github.com/thomdixon), *MIT License*
- [Python-AES](https://github.com/orgurar/python-aes) by [Or Gur Arie](https://github.com/orgurar), *MIT License*
- [Curve25519.py](https://gist.github.com/nickovs/cc3c22d15f239a2640c185035c06f8a3#file-curve25519-py) by [Nicko van Someren](https://gist.github.com/nickovs), *Public Domain*
- [I2Plib](https://github.com/l-n-s/i2plib) by [Viktor Villainov](https://github.com/l-n-s)
- [PySerial](https://github.com/pyserial/pyserial) by Chris Liechti, *BSD License*
- [Configobj](https://github.com/DiffSK/configobj) by Michael Foord, Nicola Larosa, Rob Dennis & Eli Courtwright, *BSD License*
- [Six](https://github.com/benjaminp/six) by [Benjamin Peterson](https://github.com/benjaminp), *MIT License*
- [ifaddr](https://github.com/pydron/ifaddr) by [Pydron](https://github.com/pydron), *MIT License*
- [Umsgpack.py](https://github.com/vsergeev/u-msgpack-python) by [Ivan A. Sergeev](https://github.com/vsergeev)
- [Python](https://www.python.org)

RNS/Buffer.py (new file)

@ -0,0 +1,359 @@
# MIT License
#
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from __future__ import annotations
import bz2
import sys
import time
import threading
from threading import RLock
import struct
from RNS.Channel import Channel, MessageBase, SystemMessageTypes
import RNS
from io import RawIOBase, BufferedRWPair, BufferedReader, BufferedWriter
from typing import Callable
from contextlib import AbstractContextManager
class StreamDataMessage(MessageBase):
MSGTYPE = SystemMessageTypes.SMT_STREAM_DATA
"""
Message type for ``Channel``. ``StreamDataMessage``
uses a system-reserved message type.
"""
STREAM_ID_MAX = 0x3fff # 16383
"""
    The stream id is limited to 2 bytes, minus the 2 bits used for the EOF and compression flags (14 bits, 0-16383)
"""
MAX_DATA_LEN = RNS.Link.MDU - 2 - 6 # 2 for stream data message header, 6 for channel envelope
"""
When the Buffer package is imported, this value is
    calculated based on the value of OVERHEAD
"""
def __init__(self, stream_id: int = None, data: bytes = None, eof: bool = False, compressed: bool = False):
"""
This class is used to encapsulate binary stream
data to be sent over a ``Channel``.
:param stream_id: id of stream relative to receiver
:param data: binary data
:param eof: set to True if signalling End of File
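        :param compressed: set to True if the supplied data has been bz2-compressed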
"""
super().__init__()
if stream_id is not None and stream_id > self.STREAM_ID_MAX:
raise ValueError("stream_id must be 0-16383")
self.stream_id = stream_id
self.compressed = compressed
self.data = data or bytes()
self.eof = eof
def pack(self) -> bytes:
if self.stream_id is None:
raise ValueError("stream_id")
header_val = (0x3fff & self.stream_id) | (0x8000 if self.eof else 0x0000) | (0x4000 if self.compressed > 0 else 0x0000)
return bytes(struct.pack(">H", header_val) + (self.data if self.data else bytes()))
def unpack(self, raw):
self.stream_id = struct.unpack(">H", raw[:2])[0]
self.eof = (0x8000 & self.stream_id) > 0
self.compressed = (0x4000 & self.stream_id) > 0
self.stream_id = self.stream_id & 0x3fff
self.data = raw[2:]
if self.compressed:
self.data = bz2.decompress(self.data)
class RawChannelReader(RawIOBase, AbstractContextManager):
"""
An implementation of RawIOBase that receives
binary stream data sent over a ``Channel``.
This class generally need not be instantiated directly.
Use :func:`RNS.Buffer.create_reader`,
:func:`RNS.Buffer.create_writer`, and
:func:`RNS.Buffer.create_bidirectional_buffer` functions
to create buffered streams with optional callbacks.
For additional information on the API of this
object, see the Python documentation for
``RawIOBase``.
"""
def __init__(self, stream_id: int, channel: Channel):
"""
Create a raw channel reader.
:param stream_id: local stream id to receive at
:param channel: ``Channel`` object to receive from
"""
self._stream_id = stream_id
self._channel = channel
self._lock = RLock()
self._buffer = bytearray()
self._eof = False
self._channel._register_message_type(StreamDataMessage, is_system_type=True)
self._channel.add_message_handler(self._handle_message)
self._listeners: [Callable[[int], None]] = []
def add_ready_callback(self, cb: Callable[[int], None]):
"""
Add a function to be called when new data is available.
The function should have the signature ``(ready_bytes: int) -> None``
:param cb: function to call
"""
with self._lock:
self._listeners.append(cb)
def remove_ready_callback(self, cb: Callable[[int], None]):
"""
Remove a function added with :func:`RNS.RawChannelReader.add_ready_callback()`
:param cb: function to remove
"""
with self._lock:
self._listeners.remove(cb)
def _handle_message(self, message: MessageBase):
if isinstance(message, StreamDataMessage):
if message.stream_id == self._stream_id:
with self._lock:
if message.data is not None:
self._buffer.extend(message.data)
if message.eof:
self._eof = True
for listener in self._listeners:
try:
threading.Thread(target=listener, name="Message Callback", args=[len(self._buffer)], daemon=True).start()
except Exception as ex:
RNS.log("Error calling RawChannelReader(" + str(self._stream_id) + ") callback: " + str(ex), RNS.LOG_ERROR)
return True
return False
def _read(self, __size: int) -> bytes | None:
with self._lock:
result = self._buffer[:__size]
self._buffer = self._buffer[__size:]
return result if len(result) > 0 or self._eof else None
def readinto(self, __buffer: bytearray) -> int | None:
ready = self._read(len(__buffer))
if ready is not None:
__buffer[:len(ready)] = ready
return len(ready) if ready is not None else None
def writable(self) -> bool:
return False
def seekable(self) -> bool:
return False
def readable(self) -> bool:
return True
def close(self):
with self._lock:
self._channel.remove_message_handler(self._handle_message)
self._listeners.clear()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.close()
return False
class RawChannelWriter(RawIOBase, AbstractContextManager):
"""
An implementation of RawIOBase that receives
binary stream data sent over a channel.
This class generally need not be instantiated directly.
Use :func:`RNS.Buffer.create_reader`,
:func:`RNS.Buffer.create_writer`, and
:func:`RNS.Buffer.create_bidirectional_buffer` functions
to create buffered streams with optional callbacks.
For additional information on the API of this
object, see the Python documentation for
``RawIOBase``.
"""
MAX_CHUNK_LEN = 1024*16
COMPRESSION_TRIES = 4
def __init__(self, stream_id: int, channel: Channel):
"""
Create a raw channel writer.
        :param stream_id: remote stream id to send to
:param channel: ``Channel`` object to send on
"""
self._stream_id = stream_id
self._channel = channel
self._eof = False
def write(self, __b: bytes) -> int | None:
try:
comp_tries = RawChannelWriter.COMPRESSION_TRIES
comp_try = 1
comp_success = False
chunk_len = len(__b)
if chunk_len > RawChannelWriter.MAX_CHUNK_LEN:
chunk_len = RawChannelWriter.MAX_CHUNK_LEN
__b = __b[:RawChannelWriter.MAX_CHUNK_LEN]
chunk_segment = None
while chunk_len > 32 and comp_try < comp_tries:
chunk_segment_length = int(chunk_len/comp_try)
compressed_chunk = bz2.compress(__b[:chunk_segment_length])
compressed_length = len(compressed_chunk)
if compressed_length < StreamDataMessage.MAX_DATA_LEN and compressed_length < chunk_segment_length:
comp_success = True
break
else:
comp_try += 1
if comp_success:
chunk = compressed_chunk
processed_length = chunk_segment_length
else:
chunk = bytes(__b[:StreamDataMessage.MAX_DATA_LEN])
processed_length = len(chunk)
message = StreamDataMessage(self._stream_id, chunk, self._eof, comp_success)
self._channel.send(message)
return processed_length
except RNS.Channel.ChannelException as cex:
if cex.type != RNS.Channel.CEType.ME_LINK_NOT_READY:
raise
return 0
def close(self):
try:
link_rtt = self._channel._outlet.link.rtt
timeout = time.time() + (link_rtt * len(self._channel._tx_ring) * 1)
except Exception as e:
timeout = time.time() + 15
while time.time() < timeout and not self._channel.is_ready_to_send():
time.sleep(0.05)
self._eof = True
self.write(bytes())
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.close()
return False
def seekable(self) -> bool:
return False
def readable(self) -> bool:
return False
def writable(self) -> bool:
return True
class Buffer:
"""
Static functions for creating buffered streams that send
and receive over a ``Channel``.
These functions use ``BufferedReader``, ``BufferedWriter``,
and ``BufferedRWPair`` to add buffering to
``RawChannelReader`` and ``RawChannelWriter``.
"""
@staticmethod
def create_reader(stream_id: int, channel: Channel,
ready_callback: Callable[[int], None] | None = None) -> BufferedReader:
"""
Create a buffered reader that reads binary data sent
over a ``Channel``, with an optional callback when
new data is available.
Callback signature: ``(ready_bytes: int) -> None``
For more information on the reader-specific functions
of this object, see the Python documentation for
``BufferedReader``
:param stream_id: the local stream id to receive from
:param channel: the channel to receive on
:param ready_callback: function to call when new data is available
:return: a BufferedReader object
"""
reader = RawChannelReader(stream_id, channel)
if ready_callback:
reader.add_ready_callback(ready_callback)
return BufferedReader(reader)
@staticmethod
def create_writer(stream_id: int, channel: Channel) -> BufferedWriter:
"""
Create a buffered writer that writes binary data over
a ``Channel``.
For more information on the writer-specific functions
of this object, see the Python documentation for
``BufferedWriter``
:param stream_id: the remote stream id to send to
:param channel: the channel to send on
:return: a BufferedWriter object
"""
writer = RawChannelWriter(stream_id, channel)
return BufferedWriter(writer)
@staticmethod
def create_bidirectional_buffer(receive_stream_id: int, send_stream_id: int, channel: Channel,
ready_callback: Callable[[int], None] | None = None) -> BufferedRWPair:
"""
Create a buffered reader/writer pair that reads and
writes binary data over a ``Channel``, with an
optional callback when new data is available.
Callback signature: ``(ready_bytes: int) -> None``
For more information on the reader-specific functions
of this object, see the Python documentation for
``BufferedRWPair``
:param receive_stream_id: the local stream id to receive at
:param send_stream_id: the remote stream id to send to
:param channel: the channel to send and receive on
:param ready_callback: function to call when new data is available
:return: a BufferedRWPair object
"""
reader = RawChannelReader(receive_stream_id, channel)
if ready_callback:
reader.add_ready_callback(ready_callback)
writer = RawChannelWriter(send_stream_id, channel)
return BufferedRWPair(reader, writer)
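As a brief usage illustration of the API above (a sketch, not part of Buffer.py): once a link is established, its channel can be wrapped in a bidirectional buffer. The function and variable names below are assumptions; `get_channel`, `create_bidirectional_buffer`, and the `read`/`write`/`flush` calls follow the docstrings above and standard `BufferedRWPair` behaviour.

```python
# Sketch: attaching a bidirectional buffer to an established link's channel.
import RNS

buffer = None

def link_established(link):
    global buffer
    channel = link.get_channel()
    # Stream id 0 in both directions; the remote peer must use matching ids.
    buffer = RNS.Buffer.create_bidirectional_buffer(0, 0, channel, data_ready)

def data_ready(ready_bytes: int):
    # Called whenever new data is available on the receive stream
    data = buffer.read(ready_bytes)
    RNS.log("Received over buffer: " + data.decode("utf-8"))

def send_text(text: str):
    buffer.write(text.encode("utf-8"))
    buffer.flush()
```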

RNS/Channel.py (new file)

@ -0,0 +1,694 @@
# MIT License
#
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from __future__ import annotations
import collections
import enum
import threading
import time
from types import TracebackType
from typing import Type, Callable, TypeVar, Generic, NewType
import abc
import contextlib
import struct
import RNS
from abc import ABC, abstractmethod
TPacket = TypeVar("TPacket")
class SystemMessageTypes(enum.IntEnum):
SMT_STREAM_DATA = 0xff00
class ChannelOutletBase(ABC, Generic[TPacket]):
"""
An abstract transport layer interface used by Channel.
DEPRECATED: This was created for testing; eventually
Channel will use Link or a LinkBase interface
directly.
"""
@abstractmethod
def send(self, raw: bytes) -> TPacket:
raise NotImplemented()
@abstractmethod
def resend(self, packet: TPacket) -> TPacket:
raise NotImplemented()
@property
@abstractmethod
def mdu(self):
raise NotImplemented()
@property
@abstractmethod
def rtt(self):
raise NotImplemented()
@property
@abstractmethod
def is_usable(self):
raise NotImplemented()
@abstractmethod
def get_packet_state(self, packet: TPacket) -> MessageState:
raise NotImplemented()
@abstractmethod
def timed_out(self):
raise NotImplemented()
@abstractmethod
def __str__(self):
raise NotImplemented()
@abstractmethod
def set_packet_timeout_callback(self, packet: TPacket, callback: Callable[[TPacket], None] | None,
timeout: float | None = None):
raise NotImplemented()
@abstractmethod
def set_packet_delivered_callback(self, packet: TPacket, callback: Callable[[TPacket], None] | None):
raise NotImplemented()
@abstractmethod
def get_packet_id(self, packet: TPacket) -> any:
raise NotImplemented()
class CEType(enum.IntEnum):
"""
ChannelException type codes
"""
ME_NO_MSG_TYPE = 0
ME_INVALID_MSG_TYPE = 1
ME_NOT_REGISTERED = 2
ME_LINK_NOT_READY = 3
ME_ALREADY_SENT = 4
ME_TOO_BIG = 5
class ChannelException(Exception):
"""
An exception thrown by Channel, with a type code.
"""
def __init__(self, ce_type: CEType, *args):
super().__init__(args)
self.type = ce_type
class MessageState(enum.IntEnum):
"""
Set of possible states for a Message
"""
MSGSTATE_NEW = 0
MSGSTATE_SENT = 1
MSGSTATE_DELIVERED = 2
MSGSTATE_FAILED = 3
class MessageBase(abc.ABC):
"""
Base type for any messages sent or received on a Channel.
Subclasses must define the two abstract methods as well as
the ``MSGTYPE`` class variable.
"""
# MSGTYPE must be unique within all classes sent over a
# channel. Additionally, MSGTYPE > 0xf000 are reserved.
MSGTYPE = None
"""
Defines a unique identifier for a message class.
* Must be unique within all classes registered with a ``Channel``
* Must be less than ``0xf000``. Values greater than or equal to ``0xf000`` are reserved.
"""
@abstractmethod
def pack(self) -> bytes:
"""
Create and return the binary representation of the message
:return: binary representation of message
"""
raise NotImplemented()
@abstractmethod
def unpack(self, raw: bytes):
"""
Populate message from binary representation
:param raw: binary representation
"""
raise NotImplemented()
MessageCallbackType = NewType("MessageCallbackType", Callable[[MessageBase], bool])
class Envelope:
"""
Internal wrapper used to transport messages over a channel and
track its state within the channel framework.
"""
def unpack(self, message_factories: dict[int, Type]) -> MessageBase:
msgtype, self.sequence, length = struct.unpack(">HHH", self.raw[:6])
raw = self.raw[6:]
ctor = message_factories.get(msgtype, None)
if ctor is None:
raise ChannelException(CEType.ME_NOT_REGISTERED, f"Unable to find constructor for Channel MSGTYPE {hex(msgtype)}")
message = ctor()
message.unpack(raw)
self.unpacked = True
self.message = message
return message
def pack(self) -> bytes:
if self.message.__class__.MSGTYPE is None:
raise ChannelException(CEType.ME_NO_MSG_TYPE, f"{self.message.__class__} lacks MSGTYPE")
data = self.message.pack()
self.raw = struct.pack(">HHH", self.message.MSGTYPE, self.sequence, len(data)) + data
self.packed = True
return self.raw
def __init__(self, outlet: ChannelOutletBase, message: MessageBase = None, raw: bytes = None, sequence: int = None):
self.ts = time.time()
self.id = id(self)
self.message = message
self.raw = raw
self.packet: TPacket = None
self.sequence = sequence
self.outlet = outlet
self.tries = 0
self.unpacked = False
self.packed = False
self.tracked = False
class Channel(contextlib.AbstractContextManager):
"""
Provides reliable delivery of messages over
a link.
``Channel`` differs from ``Request`` and
``Resource`` in some important ways:
**Continuous**
Messages can be sent or received as long as
the ``Link`` is open.
**Bi-directional**
Messages can be sent in either direction on
the ``Link``; neither end is the client or
server.
**Size-constrained**
Messages must be encoded into a single packet.
``Channel`` is similar to ``Packet``, except that it
provides reliable delivery (automatic retries) as well
as a structure for exchanging several types of
messages over the ``Link``.
``Channel`` is not instantiated directly, but rather
obtained from a ``Link`` with ``get_channel()``.
"""
# The initial window size at channel setup
WINDOW = 2
# Absolute minimum window size
WINDOW_MIN = 2
WINDOW_MIN_LIMIT_SLOW = 2
WINDOW_MIN_LIMIT_MEDIUM = 5
WINDOW_MIN_LIMIT_FAST = 16
# The maximum window size for transfers on slow links
WINDOW_MAX_SLOW = 5
# The maximum window size for transfers on mid-speed links
WINDOW_MAX_MEDIUM = 12
# The maximum window size for transfers on fast links
WINDOW_MAX_FAST = 48
# For calculating maps and guard segments, this
# must be set to the global maximum window.
WINDOW_MAX = WINDOW_MAX_FAST
# If the fast rate is sustained for this many request
# rounds, the fast link window size will be allowed.
FAST_RATE_THRESHOLD = 10
# If the RTT rate is higher than this value,
# the max window size for fast links will be used.
RTT_FAST = 0.18
RTT_MEDIUM = 0.75
RTT_SLOW = 1.45
# The minimum allowed flexibility of the window size.
# The difference between window_max and window_min
# will never be smaller than this value.
WINDOW_FLEXIBILITY = 4
SEQ_MAX = 0xFFFF
SEQ_MODULUS = SEQ_MAX+1
def __init__(self, outlet: ChannelOutletBase):
"""
@param outlet:
"""
self._outlet = outlet
self._lock = threading.RLock()
self._tx_ring: collections.deque[Envelope] = collections.deque()
self._rx_ring: collections.deque[Envelope] = collections.deque()
self._message_callbacks: list[MessageCallbackType] = []
self._next_sequence = 0
self._next_rx_sequence = 0
self._message_factories: dict[int, Type[MessageBase]] = {}
self._max_tries = 5
self.fast_rate_rounds = 0
self.medium_rate_rounds = 0
if self._outlet.rtt > Channel.RTT_SLOW:
self.window = 1
self.window_max = 1
self.window_min = 1
self.window_flexibility = 1
else:
self.window = Channel.WINDOW
self.window_max = Channel.WINDOW_MAX_SLOW
self.window_min = Channel.WINDOW_MIN
self.window_flexibility = Channel.WINDOW_FLEXIBILITY
def __enter__(self) -> Channel:
return self
def __exit__(self, __exc_type: Type[BaseException] | None, __exc_value: BaseException | None,
__traceback: TracebackType | None) -> bool | None:
self._shutdown()
return False
def register_message_type(self, message_class: Type[MessageBase]):
"""
Register a message class for reception over a ``Channel``.
Message classes must extend ``MessageBase``.
:param message_class: Class to register
"""
self._register_message_type(message_class, is_system_type=False)
def _register_message_type(self, message_class: Type[MessageBase], *, is_system_type: bool = False):
with self._lock:
if not issubclass(message_class, MessageBase):
raise ChannelException(CEType.ME_INVALID_MSG_TYPE,
f"{message_class} is not a subclass of {MessageBase}.")
if message_class.MSGTYPE is None:
raise ChannelException(CEType.ME_INVALID_MSG_TYPE,
f"{message_class} has invalid MSGTYPE class attribute.")
if message_class.MSGTYPE >= 0xf000 and not is_system_type:
raise ChannelException(CEType.ME_INVALID_MSG_TYPE,
f"{message_class} has system-reserved message type.")
try:
message_class()
except Exception as ex:
raise ChannelException(CEType.ME_INVALID_MSG_TYPE,
f"{message_class} raised an exception when constructed with no arguments: {ex}")
self._message_factories[message_class.MSGTYPE] = message_class
def add_message_handler(self, callback: MessageCallbackType):
"""
Add a handler for incoming messages. A handler
has the following signature:
``(message: MessageBase) -> bool``
Handlers are processed in the order they are
added. If any handler returns True, processing
of the message stops; handlers after the
returning handler will not be called.
:param callback: Function to call
"""
with self._lock:
if callback not in self._message_callbacks:
self._message_callbacks.append(callback)
def remove_message_handler(self, callback: MessageCallbackType):
"""
Remove a handler added with ``add_message_handler``.
:param callback: handler to remove
"""
with self._lock:
if callback in self._message_callbacks:
self._message_callbacks.remove(callback)
def _shutdown(self):
with self._lock:
self._message_callbacks.clear()
self._clear_rings()
def _clear_rings(self):
with self._lock:
for envelope in self._tx_ring:
if envelope.packet is not None:
self._outlet.set_packet_timeout_callback(envelope.packet, None)
self._outlet.set_packet_delivered_callback(envelope.packet, None)
self._tx_ring.clear()
self._rx_ring.clear()
def _emplace_envelope(self, envelope: Envelope, ring: collections.deque[Envelope]) -> bool:
with self._lock:
i = 0
for existing in ring:
if envelope.sequence == existing.sequence:
RNS.log(f"Envelope: Emplacement of duplicate envelope with sequence "+str(envelope.sequence), RNS.LOG_EXTREME)
return False
if envelope.sequence < existing.sequence and not (self._next_rx_sequence - envelope.sequence) > (Channel.SEQ_MAX//2):
ring.insert(i, envelope)
envelope.tracked = True
return True
i += 1
envelope.tracked = True
ring.append(envelope)
return True
def _run_callbacks(self, message: MessageBase):
cbs = self._message_callbacks.copy()
for cb in cbs:
try:
if cb(message):
return
except Exception as e:
RNS.log("Channel "+str(self)+" experienced an error while running a message callback. The contained exception was: "+str(e), RNS.LOG_ERROR)
def _receive(self, raw: bytes):
try:
envelope = Envelope(outlet=self._outlet, raw=raw)
with self._lock:
message = envelope.unpack(self._message_factories)
if envelope.sequence < self._next_rx_sequence:
window_overflow = (self._next_rx_sequence+Channel.WINDOW_MAX) % Channel.SEQ_MODULUS
if window_overflow < self._next_rx_sequence:
if envelope.sequence > window_overflow:
RNS.log("Invalid packet sequence ("+str(envelope.sequence)+") received on channel "+str(self), RNS.LOG_EXTREME)
return
else:
RNS.log("Invalid packet sequence ("+str(envelope.sequence)+") received on channel "+str(self), RNS.LOG_EXTREME)
return
is_new = self._emplace_envelope(envelope, self._rx_ring)
if not is_new:
RNS.log("Duplicate message received on channel "+str(self), RNS.LOG_EXTREME)
return
else:
with self._lock:
contiguous = []
for e in self._rx_ring:
if e.sequence == self._next_rx_sequence:
contiguous.append(e)
self._next_rx_sequence = (self._next_rx_sequence + 1) % Channel.SEQ_MODULUS
if self._next_rx_sequence == 0:
for e in self._rx_ring:
if e.sequence == self._next_rx_sequence:
contiguous.append(e)
self._next_rx_sequence = (self._next_rx_sequence + 1) % Channel.SEQ_MODULUS
for e in contiguous:
if not e.unpacked:
m = e.unpack(self._message_factories)
else:
m = e.message
self._rx_ring.remove(e)
self._run_callbacks(m)
except Exception as e:
RNS.log("An error ocurred while receiving data on "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
def is_ready_to_send(self) -> bool:
"""
Check if ``Channel`` is ready to send.
:return: True if ready
"""
if not self._outlet.is_usable:
return False
with self._lock:
outstanding = 0
for envelope in self._tx_ring:
if envelope.outlet == self._outlet:
if not envelope.packet or not self._outlet.get_packet_state(envelope.packet) == MessageState.MSGSTATE_DELIVERED:
outstanding += 1
if outstanding >= self.window:
return False
return True
def _packet_tx_op(self, packet: TPacket, op: Callable[[TPacket], bool]):
with self._lock:
envelope = next(filter(lambda e: self._outlet.get_packet_id(e.packet) == self._outlet.get_packet_id(packet),
self._tx_ring), None)
if envelope and op(envelope):
envelope.tracked = False
if envelope in self._tx_ring:
self._tx_ring.remove(envelope)
if self.window < self.window_max:
self.window += 1
# TODO: Remove at some point
# RNS.log("Increased "+str(self)+" window to "+str(self.window), RNS.LOG_DEBUG)
if self._outlet.rtt != 0:
if self._outlet.rtt > Channel.RTT_FAST:
self.fast_rate_rounds = 0
if self._outlet.rtt > Channel.RTT_MEDIUM:
self.medium_rate_rounds = 0
else:
self.medium_rate_rounds += 1
if self.window_max < Channel.WINDOW_MAX_MEDIUM and self.medium_rate_rounds == Channel.FAST_RATE_THRESHOLD:
self.window_max = Channel.WINDOW_MAX_MEDIUM
self.window_min = Channel.WINDOW_MIN_LIMIT_MEDIUM
# TODO: Remove at some point
# RNS.log("Increased "+str(self)+" max window to "+str(self.window_max), RNS.LOG_DEBUG)
# RNS.log("Increased "+str(self)+" min window to "+str(self.window_min), RNS.LOG_DEBUG)
else:
self.fast_rate_rounds += 1
if self.window_max < Channel.WINDOW_MAX_FAST and self.fast_rate_rounds == Channel.FAST_RATE_THRESHOLD:
self.window_max = Channel.WINDOW_MAX_FAST
self.window_min = Channel.WINDOW_MIN_LIMIT_FAST
# TODO: Remove at some point
# RNS.log("Increased "+str(self)+" max window to "+str(self.window_max), RNS.LOG_DEBUG)
# RNS.log("Increased "+str(self)+" min window to "+str(self.window_min), RNS.LOG_DEBUG)
else:
RNS.log("Envelope not found in TX ring for "+str(self), RNS.LOG_EXTREME)
if not envelope:
RNS.log("Spurious message received on "+str(self), RNS.LOG_EXTREME)
def _packet_delivered(self, packet: TPacket):
self._packet_tx_op(packet, lambda env: True)
def _update_packet_timeouts(self):
for envelope in self._tx_ring:
updated_timeout = self._get_packet_timeout_time(envelope.tries)
if envelope.packet and hasattr(envelope.packet, "receipt") and envelope.packet.receipt and envelope.packet.receipt.timeout:
if updated_timeout > envelope.packet.receipt.timeout:
envelope.packet.receipt.set_timeout(updated_timeout)
def _get_packet_timeout_time(self, tries: int) -> float:
to = pow(1.5, tries - 1) * max(self._outlet.rtt*2.5, 0.025) * (len(self._tx_ring)+1.5)
return to
def _packet_timeout(self, packet: TPacket):
def retry_envelope(envelope: Envelope) -> bool:
if envelope.tries >= self._max_tries:
RNS.log("Retry count exceeded on "+str(self)+", tearing down Link.", RNS.LOG_ERROR)
self._shutdown() # start on separate thread?
self._outlet.timed_out()
return True
envelope.tries += 1
self._outlet.resend(envelope.packet)
self._outlet.set_packet_delivered_callback(envelope.packet, self._packet_delivered)
self._outlet.set_packet_timeout_callback(envelope.packet, self._packet_timeout, self._get_packet_timeout_time(envelope.tries))
self._update_packet_timeouts()
if self.window > self.window_min:
self.window -= 1
# TODO: Remove at some point
# RNS.log("Decreased "+str(self)+" window to "+str(self.window), RNS.LOG_DEBUG)
if self.window_max > (self.window_min+self.window_flexibility):
self.window_max -= 1
# TODO: Remove at some point
# RNS.log("Decreased "+str(self)+" max window to "+str(self.window_max), RNS.LOG_DEBUG)
# TODO: Remove at some point
# RNS.log("Decreased "+str(self)+" window to "+str(self.window), RNS.LOG_EXTREME)
return False
if self._outlet.get_packet_state(packet) != MessageState.MSGSTATE_DELIVERED:
self._packet_tx_op(packet, retry_envelope)
def send(self, message: MessageBase) -> Envelope:
"""
Send a message. If a message send is attempted and
``Channel`` is not ready, an exception is raised.
:param message: an instance of a ``MessageBase`` subclass
:return: the ``Envelope`` tracking the outbound message
"""
envelope: Envelope | None = None
with self._lock:
if not self.is_ready_to_send():
raise ChannelException(CEType.ME_LINK_NOT_READY, f"Link is not ready")
envelope = Envelope(self._outlet, message=message, sequence=self._next_sequence)
self._next_sequence = (self._next_sequence + 1) % Channel.SEQ_MODULUS
self._emplace_envelope(envelope, self._tx_ring)
if envelope is None:
raise BlockingIOError()
envelope.pack()
if len(envelope.raw) > self._outlet.mdu:
raise ChannelException(CEType.ME_TOO_BIG, f"Packed message too big for packet: {len(envelope.raw)} > {self._outlet.mdu}")
envelope.packet = self._outlet.send(envelope.raw)
envelope.tries += 1
self._outlet.set_packet_delivered_callback(envelope.packet, self._packet_delivered)
self._outlet.set_packet_timeout_callback(envelope.packet, self._packet_timeout, self._get_packet_timeout_time(envelope.tries))
self._update_packet_timeouts()
return envelope
@property
def MDU(self):
"""
Maximum Data Unit: the number of bytes available
for a message to consume in a single send. This
value is adjusted from the ``Link`` MDU to accommodate
message header information.
:return: number of bytes available
"""
return self._outlet.mdu - 6 # sizeof(msgtype) + sizeof(sequence) + sizeof(length)
class LinkChannelOutlet(ChannelOutletBase):
"""
An implementation of ChannelOutletBase for RNS.Link.
Allows a Channel to send its messages over an RNS Link
using Packets.
:param link: RNS Link to wrap
"""
def __init__(self, link: RNS.Link):
self.link = link
def send(self, raw: bytes) -> RNS.Packet:
packet = RNS.Packet(self.link, raw, context=RNS.Packet.CHANNEL)
if self.link.status == RNS.Link.ACTIVE:
packet.send()
return packet
def resend(self, packet: RNS.Packet) -> RNS.Packet:
receipt = packet.resend()
if not receipt:
RNS.log("Failed to resend packet", RNS.LOG_ERROR)
return packet
@property
def mdu(self):
return self.link.MDU
@property
def rtt(self):
return self.link.rtt
@property
def is_usable(self):
return True # checking Link.status here proved unreliable, so the outlet always reports as usable
def get_packet_state(self, packet: TPacket) -> MessageState:
if packet.receipt == None:
return MessageState.MSGSTATE_FAILED
status = packet.receipt.get_status()
if status == RNS.PacketReceipt.SENT:
return MessageState.MSGSTATE_SENT
if status == RNS.PacketReceipt.DELIVERED:
return MessageState.MSGSTATE_DELIVERED
if status == RNS.PacketReceipt.FAILED:
return MessageState.MSGSTATE_FAILED
else:
raise Exception(f"Unexpected receipt state: {status}")
def timed_out(self):
self.link.teardown()
def __str__(self):
return f"{self.__class__.__name__}({self.link})"
def set_packet_timeout_callback(self, packet: RNS.Packet, callback: Callable[[RNS.Packet], None] | None,
timeout: float | None = None):
if timeout and packet.receipt:
packet.receipt.set_timeout(timeout)
def inner(receipt: RNS.PacketReceipt):
callback(packet)
if packet and packet.receipt:
packet.receipt.set_timeout_callback(inner if callback else None)
def set_packet_delivered_callback(self, packet: RNS.Packet, callback: Callable[[RNS.Packet], None] | None):
def inner(receipt: RNS.PacketReceipt):
callback(packet)
if packet and packet.receipt:
packet.receipt.set_delivery_callback(inner if callback else None)
def get_packet_id(self, packet: RNS.Packet) -> any:
if packet and hasattr(packet, "get_hash") and callable(packet.get_hash):
return packet.get_hash()
else:
return None
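# Illustrative sketch, not part of the original file: using a Channel obtained
# from an established RNS.Link. Assumes a message class like the
# ExampleStringMessage sketched above; the function and callback names here
# are hypothetical.
def example_link_established(link):
    channel = link.get_channel()
    channel.register_message_type(ExampleStringMessage)
    channel.add_message_handler(example_message_handler)
    if channel.is_ready_to_send():
        # Messages larger than channel.MDU bytes when packed will be rejected
        channel.send(ExampleStringMessage("Hello over the channel"))

def example_message_handler(message):
    if isinstance(message, ExampleStringMessage):
        RNS.log("Received: " + message.text)
        return True   # stop processing by any later handlers
    return False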

RNS/Cryptography/AES.py Normal file

@ -0,0 +1,68 @@
# MIT License
#
# Copyright (c) 2022 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import RNS.Cryptography.Provider as cp
import RNS.vendor.platformutils as pu
if cp.PROVIDER == cp.PROVIDER_INTERNAL:
from .aes import AES
elif cp.PROVIDER == cp.PROVIDER_PYCA:
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
if pu.cryptography_old_api():
from cryptography.hazmat.backends import default_backend
class AES_128_CBC:
@staticmethod
def encrypt(plaintext, key, iv):
if cp.PROVIDER == cp.PROVIDER_INTERNAL:
cipher = AES(key)
return cipher.encrypt(plaintext, iv)
elif cp.PROVIDER == cp.PROVIDER_PYCA:
if not pu.cryptography_old_api():
cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
else:
cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())
encryptor = cipher.encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()
return ciphertext
@staticmethod
def decrypt(ciphertext, key, iv):
if cp.PROVIDER == cp.PROVIDER_INTERNAL:
cipher = AES(key)
return cipher.decrypt(ciphertext, iv)
elif cp.PROVIDER == cp.PROVIDER_PYCA:
if not pu.cryptography_old_api():
cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
else:
cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())
decryptor = cipher.decryptor()
plaintext = decryptor.update(ciphertext) + decryptor.finalize()
return plaintext
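# Illustrative sketch, not part of the original file: a single encrypt/decrypt
# round trip with the provider-agnostic wrapper above. Padding is the caller's
# responsibility, so PKCS7 from RNS.Cryptography is applied here, mirroring
# how the Fernet implementation uses this class.
def example_aes_cbc_roundtrip():
    import os
    from RNS.Cryptography import PKCS7
    key = os.urandom(16)   # AES-128 key
    iv = os.urandom(16)    # fresh IV for every message
    plaintext = b"example channel data"
    ciphertext = AES_128_CBC.encrypt(PKCS7.pad(plaintext), key, iv)
    assert PKCS7.unpad(AES_128_CBC.decrypt(ciphertext, key, iv)) == plaintext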

RNS/Cryptography/Ed25519.py Normal file

@ -0,0 +1,41 @@
import os
from .pure25519 import ed25519_oop as ed25519
class Ed25519PrivateKey:
def __init__(self, seed):
self.seed = seed
self.sk = ed25519.SigningKey(self.seed)
#self.vk = self.sk.get_verifying_key()
@classmethod
def generate(cls):
return cls.from_private_bytes(os.urandom(32))
@classmethod
def from_private_bytes(cls, data):
return cls(seed=data)
def private_bytes(self):
return self.seed
def public_key(self):
return Ed25519PublicKey.from_public_bytes(self.sk.vk_s)
def sign(self, message):
return self.sk.sign(message)
class Ed25519PublicKey:
def __init__(self, seed):
self.seed = seed
self.vk = ed25519.VerifyingKey(self.seed)
@classmethod
def from_public_bytes(cls, data):
return cls(data)
def public_bytes(self):
return self.vk.to_bytes()
def verify(self, signature, message):
self.vk.verify(signature, message)
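# Illustrative sketch, not part of the original file: signing and verifying
# with the pure-Python classes above. verify() returns None on success and
# raises an exception on a bad signature, mirroring the PyCA-style API.
def example_ed25519_roundtrip():
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()
    signature = private_key.sign(b"example message")
    public_key.verify(signature, b"example message")   # raises if invalid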

RNS/Cryptography/Fernet.py Normal file

@ -0,0 +1,110 @@
# MIT License
#
# Copyright (c) 2022 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import os
import time
from RNS.Cryptography import HMAC
from RNS.Cryptography import PKCS7
from RNS.Cryptography.AES import AES_128_CBC
class Fernet():
"""
This class provides a slightly modified implementation of the Fernet spec
found at: https://github.com/fernet/spec/blob/master/Spec.md
According to the spec, a Fernet token includes a one-byte VERSION and an
eight-byte TIMESTAMP field at the start of each token. These fields are
not relevant to Reticulum. They are therefore stripped from this
implementation, since they incur overhead and leak initiator metadata.
"""
FERNET_OVERHEAD = 48 # Bytes
@staticmethod
def generate_key():
return os.urandom(32)
def __init__(self, key = None):
if key == None:
raise ValueError("Token key cannot be None")
if len(key) != 32:
raise ValueError("Token key must be 32 bytes, not "+str(len(key)))
self._signing_key = key[:16]
self._encryption_key = key[16:]
def verify_hmac(self, token):
if len(token) <= 32:
raise ValueError("Cannot verify HMAC on token of only "+str(len(token))+" bytes")
else:
received_hmac = token[-32:]
expected_hmac = HMAC.new(self._signing_key, token[:-32]).digest()
if received_hmac == expected_hmac:
return True
else:
return False
def encrypt(self, data = None):
iv = os.urandom(16)
current_time = int(time.time())
if not isinstance(data, bytes):
raise TypeError("Token plaintext input must be bytes")
ciphertext = AES_128_CBC.encrypt(
plaintext = PKCS7.pad(data),
key = self._encryption_key,
iv = iv,
)
signed_parts = iv+ciphertext
return signed_parts + HMAC.new(self._signing_key, signed_parts).digest()
def decrypt(self, token = None):
if not isinstance(token, bytes):
raise TypeError("Token must be bytes")
if not self.verify_hmac(token):
raise ValueError("Token HMAC was invalid")
iv = token[:16]
ciphertext = token[16:-32]
try:
plaintext = PKCS7.unpad(
AES_128_CBC.decrypt(
ciphertext,
self._encryption_key,
iv,
)
)
return plaintext
except Exception as e:
raise ValueError("Could not decrypt token")
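# Illustrative sketch, not part of the original file: a token round trip with
# the modified Fernet above. The 32-byte key is split into a 16-byte signing
# key and a 16-byte encryption key, and each token is laid out as
# IV (16 bytes) + ciphertext + HMAC-SHA256 (32 bytes).
def example_fernet_roundtrip():
    key = Fernet.generate_key()
    token_machine = Fernet(key)
    token = token_machine.encrypt(b"example plaintext")
    assert token_machine.decrypt(token) == b"example plaintext"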

RNS/Cryptography/HKDF.py Normal file

@ -0,0 +1,54 @@
# MIT License
#
# Copyright (c) 2022 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import hashlib
from math import ceil
from RNS.Cryptography import HMAC
def hkdf(length=None, derive_from=None, salt=None, context=None):
hash_len = 32
def hmac_sha256(key, data):
return HMAC.new(key, data).digest()
if length == None or length < 1:
raise ValueError("Invalid output key length")
if derive_from == None or derive_from == "":
raise ValueError("Cannot derive key from empty input material")
if salt == None or len(salt) == 0:
salt = bytes([0] * hash_len)
if context == None:
context = b""
pseudorandom_key = hmac_sha256(salt, derive_from)
block = b""
derived = b""
for i in range(ceil(length / hash_len)):
block = hmac_sha256(pseudorandom_key, block + context + bytes([i + 1]))
derived += block
return derived[:length]
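# Illustrative sketch, not part of the original file: deriving 64 bytes of key
# material from a shared secret with the HKDF above. The salt and context
# values are hypothetical placeholders.
def example_hkdf_derivation():
    shared_secret = bytes(32)   # stand-in for e.g. an X25519 shared secret
    derived = hkdf(length=64, derive_from=shared_secret,
                   salt=b"example-salt", context=b"example-context")
    signing_key, encryption_key = derived[:32], derived[32:]
    return signing_key, encryption_key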

RNS/Cryptography/HMAC.py Normal file

@ -0,0 +1,183 @@
# This HMAC implementation comes directly from the HMAC implementation
# included in Python 3.10.4, and is almost completely identical. It has
# been modified to be a pure Python implementation, that is not dependent
# on the system having OpenSSL binaries installed.
import warnings as _warnings
import hashlib as _hashlib
trans_5C = bytes((x ^ 0x5C) for x in range(256))
trans_36 = bytes((x ^ 0x36) for x in range(256))
# The size of the digests returned by HMAC depends on the underlying
# hashing module used. Use digest_size from the instance of HMAC instead.
digest_size = None
class HMAC:
"""RFC 2104 HMAC class. Also complies with RFC 4231.
This supports the API for Cryptographic Hash Functions (PEP 247).
"""
blocksize = 64 # 512-bit HMAC; can be changed in subclasses.
__slots__ = (
"_hmac", "_inner", "_outer", "block_size", "digest_size"
)
def __init__(self, key, msg=None, digestmod=_hashlib.sha256):
"""Create a new HMAC object.
key: bytes or buffer, key for the keyed hash object.
msg: bytes or buffer, Initial input for the hash or None.
digestmod: A hash name suitable for hashlib.new(). *OR*
A hashlib constructor returning a new hash object. *OR*
A module supporting PEP 247.
Required as of 3.8, despite its position after the optional
msg argument. Passing it as a keyword argument is
recommended, though not required for legacy API reasons.
"""
if not isinstance(key, (bytes, bytearray)):
raise TypeError("key: expected bytes or bytearray, but got %r" % type(key).__name__)
if not digestmod:
raise TypeError("Missing required parameter 'digestmod'.")
self._hmac_init(key, msg, digestmod)
def _hmac_init(self, key, msg, digestmod):
if callable(digestmod):
digest_cons = digestmod
elif isinstance(digestmod, str):
digest_cons = lambda d=b'': _hashlib.new(digestmod, d)
else:
digest_cons = lambda d=b'': digestmod.new(d)
self._hmac = None
self._outer = digest_cons()
self._inner = digest_cons()
self.digest_size = self._inner.digest_size
if hasattr(self._inner, 'block_size'):
blocksize = self._inner.block_size
if blocksize < 16:
_warnings.warn('block_size of %d seems too small; using our '
'default of %d.' % (blocksize, self.blocksize),
RuntimeWarning, 2)
blocksize = self.blocksize
else:
_warnings.warn('No block_size attribute on given digest object; '
'Assuming %d.' % (self.blocksize),
RuntimeWarning, 2)
blocksize = self.blocksize
if len(key) > blocksize:
key = digest_cons(key).digest()
# self.blocksize is the default blocksize. self.block_size is
# effective block size as well as the public API attribute.
self.block_size = blocksize
key = key.ljust(blocksize, b'\0')
self._outer.update(key.translate(trans_5C))
self._inner.update(key.translate(trans_36))
if msg is not None:
self.update(msg)
@property
def name(self):
if self._hmac:
return self._hmac.name
else:
return f"hmac-{self._inner.name}"
def update(self, msg):
"""Feed data from msg into this hashing object."""
inst = self._hmac or self._inner
inst.update(msg)
def copy(self):
"""Return a separate copy of this hashing object.
An update to this copy won't affect the original object.
"""
# Call __new__ directly to avoid the expensive __init__.
other = self.__class__.__new__(self.__class__)
other.digest_size = self.digest_size
if self._hmac:
other._hmac = self._hmac.copy()
other._inner = other._outer = None
else:
other._hmac = None
other._inner = self._inner.copy()
other._outer = self._outer.copy()
return other
def _current(self):
"""Return a hash object for the current state.
To be used only internally with digest() and hexdigest().
"""
if self._hmac:
return self._hmac
else:
h = self._outer.copy()
h.update(self._inner.digest())
return h
def digest(self):
"""Return the hash value of this hashing object.
This returns the hmac value as bytes. The object is
not altered in any way by this function; you can continue
updating the object after calling this function.
"""
h = self._current()
return h.digest()
def hexdigest(self):
"""Like digest(), but returns a string of hexadecimal digits instead.
"""
h = self._current()
return h.hexdigest()
def new(key, msg=None, digestmod=_hashlib.sha256):
"""Create a new hashing object and return it.
key: bytes or buffer, The starting key for the hash.
msg: bytes or buffer, Initial input for the hash, or None.
digestmod: A hash name suitable for hashlib.new(). *OR*
A hashlib constructor returning a new hash object. *OR*
A module supporting PEP 247.
Required as of 3.8, despite its position after the optional
msg argument. Passing it as a keyword argument is
recommended, though not required for legacy API reasons.
You can now feed arbitrary bytes into the object using its update()
method, and can ask for the hash value at any time by calling its digest()
or hexdigest() methods.
"""
return HMAC(key, msg, digestmod)
def digest(key, msg, digest):
"""Fast inline implementation of HMAC.
key: bytes or buffer, The key for the keyed hash object.
msg: bytes or buffer, Input message.
digest: A hash name suitable for hashlib.new() for best performance. *OR*
A hashlib constructor returning a new hash object. *OR*
A module supporting PEP 247.
"""
if callable(digest):
digest_cons = digest
elif isinstance(digest, str):
digest_cons = lambda d=b'': _hashlib.new(digest, d)
else:
digest_cons = lambda d=b'': digest.new(d)
inner = digest_cons()
outer = digest_cons()
blocksize = getattr(inner, 'block_size', 64)
if len(key) > blocksize:
key = digest_cons(key).digest()
key = key + b'\x00' * (blocksize - len(key))
inner.update(key.translate(trans_36))
outer.update(key.translate(trans_5C))
inner.update(msg)
outer.update(inner.digest())
return outer.digest()
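# Illustrative sketch, not part of the original file: computing a tag with the
# HMAC class above and checking it against the fast inline digest() helper,
# using a constant-time comparison from the standard library.
def example_hmac_usage():
    import hmac as stdlib_hmac
    key, message = bytes(32), b"example message"
    tag = new(key, message).digest()
    assert stdlib_hmac.compare_digest(tag, digest(key, message, _hashlib.sha256))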

RNS/Cryptography/Hashes.py Normal file

@ -0,0 +1,34 @@
import importlib.util
if importlib.util.find_spec('hashlib') != None:
import hashlib
else:
hashlib = None
if hasattr(hashlib, "sha512"):
from hashlib import sha512 as ext_sha512
else:
from .SHA512 import sha512 as ext_sha512
if hasattr(hashlib, "sha256"):
from hashlib import sha256 as ext_sha256
else:
from .SHA256 import sha256 as ext_sha256
"""
The SHA primitives are abstracted here to allow platform-
aware hardware acceleration in the future. Currently only
uses Python's internal SHA-256 implementation. All SHA-256
calls in RNS end up here.
"""
def sha256(data):
digest = ext_sha256()
digest.update(data)
return digest.digest()
def sha512(data):
digest = ext_sha512()
digest.update(data)
return digest.digest()
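# Illustrative sketch, not part of the original file: both helpers take bytes
# and return the raw digest, rather than a hash object.
def example_hashes():
    assert len(sha256(b"example")) == 32
    assert len(sha512(b"example")) == 64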

RNS/Cryptography/PKCS7.py Normal file

@ -0,0 +1,40 @@
# MIT License
#
# Copyright (c) 2022 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
class PKCS7:
BLOCKSIZE = 16
@staticmethod
def pad(data, bs=BLOCKSIZE):
l = len(data)
n = bs-l%bs
v = bytes([n])
return data+v*n
@staticmethod
def unpad(data, bs=BLOCKSIZE):
l = len(data)
n = data[-1]
if n > bs:
raise ValueError("Cannot unpad, invalid padding length of "+str(n)+" bytes")
else:
return data[:l-n]
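# Illustrative sketch, not part of the original file: PKCS#7 always adds
# between 1 and BLOCKSIZE bytes, so block-aligned input grows by a full block.
def example_pkcs7_padding():
    padded = PKCS7.pad(b"A" * 16)
    assert len(padded) == 32 and padded[-1] == 16
    assert PKCS7.unpad(padded) == b"A" * 16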

RNS/Cryptography/Provider.py Normal file

@ -0,0 +1,38 @@
import importlib.util
PROVIDER_NONE = 0x00
PROVIDER_INTERNAL = 0x01
PROVIDER_PYCA = 0x02
PROVIDER = PROVIDER_NONE
pyca_v = None
use_pyca = False
try:
if importlib.util.find_spec('cryptography') != None:
import cryptography
pyca_v = cryptography.__version__
v = pyca_v.split(".")
if int(v[0]) == 2:
if int(v[1]) >= 8:
use_pyca = True
elif int(v[0]) >= 3:
use_pyca = True
except Exception as e:
pass
if use_pyca:
PROVIDER = PROVIDER_PYCA
else:
PROVIDER = PROVIDER_INTERNAL
def backend():
if PROVIDER == PROVIDER_NONE:
return "none"
elif PROVIDER == PROVIDER_INTERNAL:
return "internal"
elif PROVIDER == PROVIDER_PYCA:
return "openssl, PyCA "+str(pyca_v)

RNS/Cryptography/Proxies.py Normal file

@ -0,0 +1,90 @@
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
# These proxy classes exist to create a uniform API across
# cryptography primitive providers.
class X25519PrivateKeyProxy:
def __init__(self, real):
self.real = real
@classmethod
def generate(cls):
return cls(X25519PrivateKey.generate())
@classmethod
def from_private_bytes(cls, data):
return cls(X25519PrivateKey.from_private_bytes(data))
def private_bytes(self):
return self.real.private_bytes(
encoding=serialization.Encoding.Raw,
format=serialization.PrivateFormat.Raw,
encryption_algorithm=serialization.NoEncryption(),
)
def public_key(self):
return X25519PublicKeyProxy(self.real.public_key())
def exchange(self, peer_public_key):
return self.real.exchange(peer_public_key.real)
class X25519PublicKeyProxy:
def __init__(self, real):
self.real = real
@classmethod
def from_public_bytes(cls, data):
return cls(X25519PublicKey.from_public_bytes(data))
def public_bytes(self):
return self.real.public_bytes(
encoding=serialization.Encoding.Raw,
format=serialization.PublicFormat.Raw
)
class Ed25519PrivateKeyProxy:
def __init__(self, real):
self.real = real
@classmethod
def generate(cls):
return cls(Ed25519PrivateKey.generate())
@classmethod
def from_private_bytes(cls, data):
return cls(Ed25519PrivateKey.from_private_bytes(data))
def private_bytes(self):
return self.real.private_bytes(
encoding=serialization.Encoding.Raw,
format=serialization.PrivateFormat.Raw,
encryption_algorithm=serialization.NoEncryption()
)
def public_key(self):
return Ed25519PublicKeyProxy(self.real.public_key())
def sign(self, message):
return self.real.sign(message)
class Ed25519PublicKeyProxy:
def __init__(self, real):
self.real = real
@classmethod
def from_public_bytes(cls, data):
return cls(Ed25519PublicKey.from_public_bytes(data))
def public_bytes(self):
return self.real.public_bytes(
encoding=serialization.Encoding.Raw,
format=serialization.PublicFormat.Raw
)
def verify(self, signature, message):
self.real.verify(signature, message)

RNS/Cryptography/SHA256.py Normal file

@ -0,0 +1,129 @@
# MIT License
#
# Copyright (c) 2017 Thomas Dixon
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import copy
import struct
import sys
def new(m=None):
return sha256(m)
class sha256(object):
_k = (0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc,
0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7,
0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,
0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13,
0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3,
0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5,
0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208,
0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2)
_h = (0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19)
_output_size = 8
blocksize = 1
block_size = 64
digest_size = 32
def __init__(self, m=None):
self._buffer = b""
self._counter = 0
if m is not None:
if type(m) is not bytes:
raise TypeError('%s() argument 1 must be bytes, not %s' % (self.__class__.__name__, type(m).__name__))
self.update(m)
def _rotr(self, x, y):
return ((x >> y) | (x << (32-y))) & 0xFFFFFFFF
def _sha256_process(self, c):
w = [0]*64
w[0:16] = struct.unpack('!16L', c)
for i in range(16, 64):
s0 = self._rotr(w[i-15], 7) ^ self._rotr(w[i-15], 18) ^ (w[i-15] >> 3)
s1 = self._rotr(w[i-2], 17) ^ self._rotr(w[i-2], 19) ^ (w[i-2] >> 10)
w[i] = (w[i-16] + s0 + w[i-7] + s1) & 0xFFFFFFFF
a,b,c,d,e,f,g,h = self._h
for i in range(64):
s0 = self._rotr(a, 2) ^ self._rotr(a, 13) ^ self._rotr(a, 22)
maj = (a & b) ^ (a & c) ^ (b & c)
t2 = s0 + maj
s1 = self._rotr(e, 6) ^ self._rotr(e, 11) ^ self._rotr(e, 25)
ch = (e & f) ^ ((~e) & g)
t1 = h + s1 + ch + self._k[i] + w[i]
h = g
g = f
f = e
e = (d + t1) & 0xFFFFFFFF
d = c
c = b
b = a
a = (t1 + t2) & 0xFFFFFFFF
self._h = [(x+y) & 0xFFFFFFFF for x,y in zip(self._h, [a,b,c,d,e,f,g,h])]
def update(self, m):
if not m:
return
if type(m) is not bytes:
raise TypeError('%s() argument 1 must be bytes, not %s' % (sys._getframe().f_code.co_name, type(m).__name__))
self._buffer += m
self._counter += len(m)
while len(self._buffer) >= 64:
self._sha256_process(self._buffer[:64])
self._buffer = self._buffer[64:]
def digest(self):
mdi = self._counter & 0x3F
length = struct.pack('!Q', self._counter<<3)
if mdi < 56:
padlen = 55-mdi
else:
padlen = 119-mdi
r = self.copy()
r.update(b'\x80'+(b'\x00'*padlen)+length)
return b''.join([struct.pack('!L', i) for i in r._h[:self._output_size]])
def hexdigest(self):
return self.digest().hex()
def copy(self):
return copy.deepcopy(self)

RNS/Cryptography/SHA512.py Normal file

@ -0,0 +1,129 @@
# MIT License
#
# Copyright (c) 2017 Thomas Dixon
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import copy, struct, sys
def new(m=None):
return sha512(m)
class sha512(object):
_k = (0x428a2f98d728ae22, 0x7137449123ef65cd, 0xb5c0fbcfec4d3b2f, 0xe9b5dba58189dbbc,
0x3956c25bf348b538, 0x59f111f1b605d019, 0x923f82a4af194f9b, 0xab1c5ed5da6d8118,
0xd807aa98a3030242, 0x12835b0145706fbe, 0x243185be4ee4b28c, 0x550c7dc3d5ffb4e2,
0x72be5d74f27b896f, 0x80deb1fe3b1696b1, 0x9bdc06a725c71235, 0xc19bf174cf692694,
0xe49b69c19ef14ad2, 0xefbe4786384f25e3, 0x0fc19dc68b8cd5b5, 0x240ca1cc77ac9c65,
0x2de92c6f592b0275, 0x4a7484aa6ea6e483, 0x5cb0a9dcbd41fbd4, 0x76f988da831153b5,
0x983e5152ee66dfab, 0xa831c66d2db43210, 0xb00327c898fb213f, 0xbf597fc7beef0ee4,
0xc6e00bf33da88fc2, 0xd5a79147930aa725, 0x06ca6351e003826f, 0x142929670a0e6e70,
0x27b70a8546d22ffc, 0x2e1b21385c26c926, 0x4d2c6dfc5ac42aed, 0x53380d139d95b3df,
0x650a73548baf63de, 0x766a0abb3c77b2a8, 0x81c2c92e47edaee6, 0x92722c851482353b,
0xa2bfe8a14cf10364, 0xa81a664bbc423001, 0xc24b8b70d0f89791, 0xc76c51a30654be30,
0xd192e819d6ef5218, 0xd69906245565a910, 0xf40e35855771202a, 0x106aa07032bbd1b8,
0x19a4c116b8d2d0c8, 0x1e376c085141ab53, 0x2748774cdf8eeb99, 0x34b0bcb5e19b48a8,
0x391c0cb3c5c95a63, 0x4ed8aa4ae3418acb, 0x5b9cca4f7763e373, 0x682e6ff3d6b2b8a3,
0x748f82ee5defb2fc, 0x78a5636f43172f60, 0x84c87814a1f0ab72, 0x8cc702081a6439ec,
0x90befffa23631e28, 0xa4506cebde82bde9, 0xbef9a3f7b2c67915, 0xc67178f2e372532b,
0xca273eceea26619c, 0xd186b8c721c0c207, 0xeada7dd6cde0eb1e, 0xf57d4f7fee6ed178,
0x06f067aa72176fba, 0x0a637dc5a2c898a6, 0x113f9804bef90dae, 0x1b710b35131c471b,
0x28db77f523047d84, 0x32caab7b40c72493, 0x3c9ebe0a15c9bebc, 0x431d67c49c100d4c,
0x4cc5d4becb3e42b6, 0x597f299cfc657e2a, 0x5fcb6fab3ad6faec, 0x6c44198c4a475817)
_h = (0x6a09e667f3bcc908, 0xbb67ae8584caa73b, 0x3c6ef372fe94f82b, 0xa54ff53a5f1d36f1,
0x510e527fade682d1, 0x9b05688c2b3e6c1f, 0x1f83d9abfb41bd6b, 0x5be0cd19137e2179)
_output_size = 8
blocksize = 1
block_size = 128
digest_size = 64
def __init__(self, m=None):
self._buffer = b''
self._counter = 0
if m is not None:
if type(m) is not bytes:
raise TypeError('%s() argument 1 must be bytes, not %s' % (self.__class__.__name__, type(m).__name__))
self.update(m)
def _rotr(self, x, y):
return ((x >> y) | (x << (64-y))) & 0xFFFFFFFFFFFFFFFF
def _sha512_process(self, chunk):
w = [0]*80
w[0:16] = struct.unpack('!16Q', chunk)
for i in range(16, 80):
s0 = self._rotr(w[i-15], 1) ^ self._rotr(w[i-15], 8) ^ (w[i-15] >> 7)
s1 = self._rotr(w[i-2], 19) ^ self._rotr(w[i-2], 61) ^ (w[i-2] >> 6)
w[i] = (w[i-16] + s0 + w[i-7] + s1) & 0xFFFFFFFFFFFFFFFF
a,b,c,d,e,f,g,h = self._h
for i in range(80):
s0 = self._rotr(a, 28) ^ self._rotr(a, 34) ^ self._rotr(a, 39)
maj = (a & b) ^ (a & c) ^ (b & c)
t2 = s0 + maj
s1 = self._rotr(e, 14) ^ self._rotr(e, 18) ^ self._rotr(e, 41)
ch = (e & f) ^ ((~e) & g)
t1 = h + s1 + ch + self._k[i] + w[i]
h = g
g = f
f = e
e = (d + t1) & 0xFFFFFFFFFFFFFFFF
d = c
c = b
b = a
a = (t1 + t2) & 0xFFFFFFFFFFFFFFFF
self._h = [(x+y) & 0xFFFFFFFFFFFFFFFF for x,y in zip(self._h, [a,b,c,d,e,f,g,h])]
def update(self, m):
if not m:
return
if type(m) is not bytes:
raise TypeError('%s() argument 1 must be bytes, not %s' % (sys._getframe().f_code.co_name, type(m).__name__))
self._buffer += m
self._counter += len(m)
while len(self._buffer) >= 128:
self._sha512_process(self._buffer[:128])
self._buffer = self._buffer[128:]
def digest(self):
mdi = self._counter & 0x7F
length = struct.pack('!Q', self._counter<<3)
if mdi < 112:
padlen = 111-mdi
else:
padlen = 239-mdi
r = self.copy()
r.update(b'\x80'+(b'\x00'*(padlen+8))+length)
return b''.join([struct.pack('!Q', i) for i in r._h[:self._output_size]])
def hexdigest(self):
return self.digest().hex()
def copy(self):
return copy.deepcopy(self)

RNS/Cryptography/X25519.py Normal file

@ -0,0 +1,171 @@
# By Nicko van Someren, 2021. This code is released into the public domain.
# Small modifications for use in Reticulum, and constant time key exchange
# added by Mark Qvist in 2022.
# WARNING! Only the X25519PrivateKey.exchange() method attempts to hide execution time.
# In the context of Reticulum, this is sufficient, but it may not be in other systems. If
# this code is to be used to provide cryptographic security in an environment where the
# start and end times of the execution can be guessed, inferred or measured then it is
# critical that steps are taken to hide the execution time, for instance by adding a
# delay so that encrypted packets are not sent until a fixed time after the _start_ of
# execution.
import os
import time
P = 2 ** 255 - 19
_A = 486662
def _point_add(point_n, point_m, point_diff):
"""Given the projection of two points and their difference, return their sum"""
(xn, zn) = point_n
(xm, zm) = point_m
(x_diff, z_diff) = point_diff
x = (z_diff << 2) * (xm * xn - zm * zn) ** 2
z = (x_diff << 2) * (xm * zn - zm * xn) ** 2
return x % P, z % P
def _point_double(point_n):
"""Double a point provided in projective coordinates"""
(xn, zn) = point_n
xn2 = xn ** 2
zn2 = zn ** 2
x = (xn2 - zn2) ** 2
xzn = xn * zn
z = 4 * xzn * (xn2 + _A * xzn + zn2)
return x % P, z % P
def _const_time_swap(a, b, swap):
"""Swap two values in constant time"""
index = int(swap) * 2
temp = (a, b, b, a)
return temp[index:index+2]
def _raw_curve25519(base, n):
"""Raise the point base to the power n"""
zero = (1, 0)
one = (base, 1)
mP, m1P = zero, one
for i in reversed(range(256)):
bit = bool(n & (1 << i))
mP, m1P = _const_time_swap(mP, m1P, bit)
mP, m1P = _point_double(mP), _point_add(mP, m1P, one)
mP, m1P = _const_time_swap(mP, m1P, bit)
x, z = mP
inv_z = pow(z, P - 2, P)
return (x * inv_z) % P
def _unpack_number(s):
"""Unpack 32 bytes to a 256 bit value"""
if len(s) != 32:
raise ValueError('Curve25519 values must be 32 bytes')
return int.from_bytes(s, "little")
def _pack_number(n):
"""Pack a value into 32 bytes"""
return n.to_bytes(32, "little")
def _fix_secret(n):
"""Mask a value to be an acceptable exponent"""
n &= ~7
n &= ~(128 << 8 * 31)
n |= 64 << 8 * 31
return n
def curve25519(base_point_raw, secret_raw):
"""Raise the base point to a given power"""
base_point = _unpack_number(base_point_raw)
secret = _fix_secret(_unpack_number(secret_raw))
return _pack_number(_raw_curve25519(base_point, secret))
def curve25519_base(secret_raw):
"""Raise the generator point to a given power"""
secret = _fix_secret(_unpack_number(secret_raw))
return _pack_number(_raw_curve25519(9, secret))
class X25519PublicKey:
def __init__(self, x):
self.x = x
@classmethod
def from_public_bytes(cls, data):
return cls(_unpack_number(data))
def public_bytes(self):
return _pack_number(self.x)
class X25519PrivateKey:
MIN_EXEC_TIME = 0.002
MAX_EXEC_TIME = 0.5
DELAY_WINDOW = 10
T_CLEAR = None
T_MAX = 0
def __init__(self, a):
self.a = a
@classmethod
def generate(cls):
return cls.from_private_bytes(os.urandom(32))
@classmethod
def from_private_bytes(cls, data):
return cls(_fix_secret(_unpack_number(data)))
def private_bytes(self):
return _pack_number(self.a)
def public_key(self):
return X25519PublicKey.from_public_bytes(_pack_number(_raw_curve25519(9, self.a)))
def exchange(self, peer_public_key):
if isinstance(peer_public_key, bytes):
peer_public_key = X25519PublicKey.from_public_bytes(peer_public_key)
start = time.time()
shared = _pack_number(_raw_curve25519(peer_public_key.x, self.a))
end = time.time()
duration = end-start
if X25519PrivateKey.T_CLEAR == None:
X25519PrivateKey.T_CLEAR = end + X25519PrivateKey.DELAY_WINDOW
if end > X25519PrivateKey.T_CLEAR:
X25519PrivateKey.T_CLEAR = end + X25519PrivateKey.DELAY_WINDOW
X25519PrivateKey.T_MAX = 0
if duration < X25519PrivateKey.T_MAX or duration < X25519PrivateKey.MIN_EXEC_TIME:
target = start+X25519PrivateKey.T_MAX
if target > start+X25519PrivateKey.MAX_EXEC_TIME:
target = start+X25519PrivateKey.MAX_EXEC_TIME
if target < start+X25519PrivateKey.MIN_EXEC_TIME:
target = start+X25519PrivateKey.MIN_EXEC_TIME
try:
time.sleep(target-time.time())
except Exception as e:
pass
elif duration > X25519PrivateKey.T_MAX:
X25519PrivateKey.T_MAX = duration
return shared
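# Illustrative sketch, not part of the original file: an ephemeral
# Diffie-Hellman exchange with the classes above; both sides arrive at the
# same 32-byte shared secret.
def example_x25519_exchange():
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()
    shared_a = alice.exchange(bob.public_key())
    shared_b = bob.exchange(alice.public_key())
    assert shared_a == shared_b
    return shared_a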

RNS/Cryptography/__init__.py Normal file

@ -0,0 +1,24 @@
import os
import glob
from .Hashes import sha256
from .Hashes import sha512
from .HKDF import hkdf
from .PKCS7 import PKCS7
from .Fernet import Fernet
from .Provider import backend
import RNS.Cryptography.Provider as cp
if cp.PROVIDER == cp.PROVIDER_INTERNAL:
from RNS.Cryptography.X25519 import X25519PrivateKey, X25519PublicKey
from RNS.Cryptography.Ed25519 import Ed25519PrivateKey, Ed25519PublicKey
elif cp.PROVIDER == cp.PROVIDER_PYCA:
from RNS.Cryptography.Proxies import X25519PrivateKeyProxy as X25519PrivateKey
from RNS.Cryptography.Proxies import X25519PublicKeyProxy as X25519PublicKey
from RNS.Cryptography.Proxies import Ed25519PrivateKeyProxy as Ed25519PrivateKey
from RNS.Cryptography.Proxies import Ed25519PublicKeyProxy as Ed25519PublicKey
modules = glob.glob(os.path.dirname(__file__)+"/*.py")
__all__ = [ os.path.basename(f)[:-3] for f in modules if not f.endswith('__init__.py')]

RNS/Cryptography/aes/__init__.py Normal file

@ -0,0 +1 @@
from .aes import AES

RNS/Cryptography/aes/aes.py Normal file

@ -0,0 +1,271 @@
# MIT License
# Copyright (c) 2021 Or Gur Arie
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from .utils import *
class AES:
# AES-128 block size
block_size = 16
# AES-128 encrypts messages with 10 rounds
_rounds = 10
# initialize the AES object
def __init__(self, key):
"""
Initializes the object with a given key.
"""
# make sure key length is right
assert len(key) == AES.block_size
# ExpandKey
self._round_keys = self._expand_key(key)
# will perform the AES ExpandKey phase
def _expand_key(self, master_key):
"""
Expands and returns a list of key matrices for the given master_key.
"""
# Initialize round keys with raw key material.
key_columns = bytes2matrix(master_key)
iteration_size = len(master_key) // 4
# Each iteration has exactly as many columns as the key material.
i = 1
while len(key_columns) < (self._rounds + 1) * 4:
# Copy previous word.
word = list(key_columns[-1])
# Perform schedule_core once every "row".
if len(key_columns) % iteration_size == 0:
# Circular shift.
word.append(word.pop(0))
# Map to S-BOX.
word = [s_box[b] for b in word]
# XOR with first byte of R-CON, since the others bytes of R-CON are 0.
word[0] ^= r_con[i]
i += 1
elif len(master_key) == 32 and len(key_columns) % iteration_size == 4:
# Run word through S-box in the fourth iteration when using a
# 256-bit key.
word = [s_box[b] for b in word]
# XOR with equivalent word from previous iteration.
word = bytes(i^j for i, j in zip(word, key_columns[-iteration_size]))
key_columns.append(word)
# Group key words in 4x4 byte matrices.
return [key_columns[4*i : 4*(i+1)] for i in range(len(key_columns) // 4)]
# encrypt a single block of data with AES
def _encrypt_block(self, plaintext):
"""
Encrypts a single block of 16 byte long plaintext.
"""
# length of a single block
assert len(plaintext) == AES.block_size
# perform on a matrix
state = bytes2matrix(plaintext)
# AddRoundKey
add_round_key(state, self._round_keys[0])
# 9 main rounds
for i in range(1, self._rounds):
# SubBytes
sub_bytes(state)
# ShiftRows
shift_rows(state)
# MixCols
mix_columns(state)
# AddRoundKey
add_round_key(state, self._round_keys[i])
# last round, without the MixColumns step
sub_bytes(state)
shift_rows(state)
add_round_key(state, self._round_keys[-1])
# return the encrypted matrix as bytes
return matrix2bytes(state)
# decrypt a single block of data with AES
def _decrypt_block(self, ciphertext):
"""
Decrypts a single block of 16 byte long ciphertext.
"""
# length of a single block
assert len(ciphertext) == AES.block_size
# perform on a matrix
state = bytes2matrix(ciphertext)
# in reverse order, last round is first
add_round_key(state, self._round_keys[-1])
inv_shift_rows(state)
inv_sub_bytes(state)
for i in range(self._rounds - 1, 0, -1):
# main rounds
add_round_key(state, self._round_keys[i])
inv_mix_columns(state)
inv_shift_rows(state)
inv_sub_bytes(state)
# initial AddRoundKey phase
add_round_key(state, self._round_keys[0])
# return bytes
return matrix2bytes(state)
# will encrypt the entire data
def encrypt(self, plaintext, iv):
"""
Encrypts `plaintext` (which must already be padded to a multiple of the
block size) using CBC mode with the given initialization vector (iv).
"""
# iv length must be same as block size
assert len(iv) == AES.block_size
assert len(plaintext) % AES.block_size == 0
ciphertext_blocks = []
previous = iv
for plaintext_block in split_blocks(plaintext):
# in CBC mode every block is XOR'd with the previous block
xorred = xor_bytes(plaintext_block, previous)
# encrypt current block
block = self._encrypt_block(xorred)
previous = block
# append to ciphertext
ciphertext_blocks.append(block)
# return as bytes
return b''.join(ciphertext_blocks)
# will decrypt the entire data
def decrypt(self, ciphertext, iv):
"""
Decrypts `ciphertext` using CBC mode and PKCS#7 padding, with the given
initialization vector (iv).
"""
# iv length must be same as block size
assert len(iv) == AES.block_size
plaintext_blocks = []
previous = iv
for ciphertext_block in split_blocks(ciphertext):
# in CBC mode every block is XOR'd with the previous block
xorred = xor_bytes(previous, self._decrypt_block(ciphertext_block))
# append plaintext
plaintext_blocks.append(xorred)
previous = ciphertext_block
return b''.join(plaintext_blocks)
def test():
# modules and classes required for the test only
import os
class bcolors:
OK = '\033[92m' #GREEN
WARNING = '\033[93m' #YELLOW
FAIL = '\033[91m' #RED
RESET = '\033[0m' #RESET COLOR
# will test AES class by performing an encryption / decryption
print("AES Tests")
print("=========")
# generate a secret key and print details
key = os.urandom(AES.block_size)
_aes = AES(key)
print(f"Algorithm: AES-CBC-{AES.block_size*8}")
print(f"Secret Key: {key.hex()}")
print()
# test single block encryption / decryption
iv = os.urandom(AES.block_size)
single_block_text = b"SingleBlock Text"
print("Single Block Tests")
print("------------------")
print(f"iv: {iv.hex()}")
print(f"plain text: '{single_block_text.decode()}'")
ciphertext_block = _aes._encrypt_block(single_block_text)
plaintext_block = _aes._decrypt_block(ciphertext_block)
print(f"Ciphertext Hex: {ciphertext_block.hex()}")
print(f"Plaintext: {plaintext_block.decode()}")
assert plaintext_block == single_block_text
print(bcolors.OK + "Single Block Test Passed Successfully" + bcolors.RESET)
print()
# test a less than a block length phrase
iv = os.urandom(AES.block_size)
short_text = b"Just Text"
print("Short Text Tests")
print("----------------")
print(f"iv: {iv.hex()}")
print(f"plain text: '{short_text.decode()}'")
ciphertext_short = _aes.encrypt(short_text, iv)
plaintext_short = _aes.decrypt(ciphertext_short, iv)
print(f"Ciphertext Hex: {ciphertext_short.hex()}")
print(f"Plaintext: {plaintext_short.decode()}")
assert short_text == plaintext_short
print(bcolors.OK + "Short Text Test Passed Successfully" + bcolors.RESET)
print()
# test an arbitrary length phrase
iv = os.urandom(AES.block_size)
text = b"This Text is longer than one block"
print("Arbitrary Length Tests")
print("----------------------")
print(f"iv: {iv.hex()}")
print(f"plain text: '{text.decode()}'")
ciphertext = _aes.encrypt(text, iv)
plaintext = _aes.decrypt(ciphertext, iv)
print(f"Ciphertext Hex: {ciphertext.hex()}")
print(f"Plaintext: {plaintext.decode()}")
assert text == plaintext
print(bcolors.OK + "Arbitrary Length Text Test Passed Successfully" + bcolors.RESET)
print()
if __name__ == "__main__":
# test AES class
test()

RNS/Cryptography/aes/utils.py Normal file

@ -0,0 +1,159 @@
# MIT License
# Copyright (c) 2021 Or Gur Arie
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
'''
Utility functions for AES encryption / decryption
'''
## AES lookup tables
# resource: https://en.wikipedia.org/wiki/Rijndael_S-box
s_box = (
0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5, 0x30, 0x01, 0x67, 0x2B, 0xFE, 0xD7, 0xAB, 0x76,
0xCA, 0x82, 0xC9, 0x7D, 0xFA, 0x59, 0x47, 0xF0, 0xAD, 0xD4, 0xA2, 0xAF, 0x9C, 0xA4, 0x72, 0xC0,
0xB7, 0xFD, 0x93, 0x26, 0x36, 0x3F, 0xF7, 0xCC, 0x34, 0xA5, 0xE5, 0xF1, 0x71, 0xD8, 0x31, 0x15,
0x04, 0xC7, 0x23, 0xC3, 0x18, 0x96, 0x05, 0x9A, 0x07, 0x12, 0x80, 0xE2, 0xEB, 0x27, 0xB2, 0x75,
0x09, 0x83, 0x2C, 0x1A, 0x1B, 0x6E, 0x5A, 0xA0, 0x52, 0x3B, 0xD6, 0xB3, 0x29, 0xE3, 0x2F, 0x84,
0x53, 0xD1, 0x00, 0xED, 0x20, 0xFC, 0xB1, 0x5B, 0x6A, 0xCB, 0xBE, 0x39, 0x4A, 0x4C, 0x58, 0xCF,
0xD0, 0xEF, 0xAA, 0xFB, 0x43, 0x4D, 0x33, 0x85, 0x45, 0xF9, 0x02, 0x7F, 0x50, 0x3C, 0x9F, 0xA8,
0x51, 0xA3, 0x40, 0x8F, 0x92, 0x9D, 0x38, 0xF5, 0xBC, 0xB6, 0xDA, 0x21, 0x10, 0xFF, 0xF3, 0xD2,
0xCD, 0x0C, 0x13, 0xEC, 0x5F, 0x97, 0x44, 0x17, 0xC4, 0xA7, 0x7E, 0x3D, 0x64, 0x5D, 0x19, 0x73,
0x60, 0x81, 0x4F, 0xDC, 0x22, 0x2A, 0x90, 0x88, 0x46, 0xEE, 0xB8, 0x14, 0xDE, 0x5E, 0x0B, 0xDB,
0xE0, 0x32, 0x3A, 0x0A, 0x49, 0x06, 0x24, 0x5C, 0xC2, 0xD3, 0xAC, 0x62, 0x91, 0x95, 0xE4, 0x79,
0xE7, 0xC8, 0x37, 0x6D, 0x8D, 0xD5, 0x4E, 0xA9, 0x6C, 0x56, 0xF4, 0xEA, 0x65, 0x7A, 0xAE, 0x08,
0xBA, 0x78, 0x25, 0x2E, 0x1C, 0xA6, 0xB4, 0xC6, 0xE8, 0xDD, 0x74, 0x1F, 0x4B, 0xBD, 0x8B, 0x8A,
0x70, 0x3E, 0xB5, 0x66, 0x48, 0x03, 0xF6, 0x0E, 0x61, 0x35, 0x57, 0xB9, 0x86, 0xC1, 0x1D, 0x9E,
0xE1, 0xF8, 0x98, 0x11, 0x69, 0xD9, 0x8E, 0x94, 0x9B, 0x1E, 0x87, 0xE9, 0xCE, 0x55, 0x28, 0xDF,
0x8C, 0xA1, 0x89, 0x0D, 0xBF, 0xE6, 0x42, 0x68, 0x41, 0x99, 0x2D, 0x0F, 0xB0, 0x54, 0xBB, 0x16,
)
inv_s_box = (
0x52, 0x09, 0x6A, 0xD5, 0x30, 0x36, 0xA5, 0x38, 0xBF, 0x40, 0xA3, 0x9E, 0x81, 0xF3, 0xD7, 0xFB,
0x7C, 0xE3, 0x39, 0x82, 0x9B, 0x2F, 0xFF, 0x87, 0x34, 0x8E, 0x43, 0x44, 0xC4, 0xDE, 0xE9, 0xCB,
0x54, 0x7B, 0x94, 0x32, 0xA6, 0xC2, 0x23, 0x3D, 0xEE, 0x4C, 0x95, 0x0B, 0x42, 0xFA, 0xC3, 0x4E,
0x08, 0x2E, 0xA1, 0x66, 0x28, 0xD9, 0x24, 0xB2, 0x76, 0x5B, 0xA2, 0x49, 0x6D, 0x8B, 0xD1, 0x25,
0x72, 0xF8, 0xF6, 0x64, 0x86, 0x68, 0x98, 0x16, 0xD4, 0xA4, 0x5C, 0xCC, 0x5D, 0x65, 0xB6, 0x92,
0x6C, 0x70, 0x48, 0x50, 0xFD, 0xED, 0xB9, 0xDA, 0x5E, 0x15, 0x46, 0x57, 0xA7, 0x8D, 0x9D, 0x84,
0x90, 0xD8, 0xAB, 0x00, 0x8C, 0xBC, 0xD3, 0x0A, 0xF7, 0xE4, 0x58, 0x05, 0xB8, 0xB3, 0x45, 0x06,
0xD0, 0x2C, 0x1E, 0x8F, 0xCA, 0x3F, 0x0F, 0x02, 0xC1, 0xAF, 0xBD, 0x03, 0x01, 0x13, 0x8A, 0x6B,
0x3A, 0x91, 0x11, 0x41, 0x4F, 0x67, 0xDC, 0xEA, 0x97, 0xF2, 0xCF, 0xCE, 0xF0, 0xB4, 0xE6, 0x73,
0x96, 0xAC, 0x74, 0x22, 0xE7, 0xAD, 0x35, 0x85, 0xE2, 0xF9, 0x37, 0xE8, 0x1C, 0x75, 0xDF, 0x6E,
0x47, 0xF1, 0x1A, 0x71, 0x1D, 0x29, 0xC5, 0x89, 0x6F, 0xB7, 0x62, 0x0E, 0xAA, 0x18, 0xBE, 0x1B,
0xFC, 0x56, 0x3E, 0x4B, 0xC6, 0xD2, 0x79, 0x20, 0x9A, 0xDB, 0xC0, 0xFE, 0x78, 0xCD, 0x5A, 0xF4,
0x1F, 0xDD, 0xA8, 0x33, 0x88, 0x07, 0xC7, 0x31, 0xB1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xEC, 0x5F,
0x60, 0x51, 0x7F, 0xA9, 0x19, 0xB5, 0x4A, 0x0D, 0x2D, 0xE5, 0x7A, 0x9F, 0x93, 0xC9, 0x9C, 0xEF,
0xA0, 0xE0, 0x3B, 0x4D, 0xAE, 0x2A, 0xF5, 0xB0, 0xC8, 0xEB, 0xBB, 0x3C, 0x83, 0x53, 0x99, 0x61,
0x17, 0x2B, 0x04, 0x7E, 0xBA, 0x77, 0xD6, 0x26, 0xE1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0C, 0x7D,
)
## AES AddRoundKey
# Round constants https://en.wikipedia.org/wiki/AES_key_schedule#Round_constants
r_con = (
0x00, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40,
0x80, 0x1B, 0x36, 0x6C, 0xD8, 0xAB, 0x4D, 0x9A,
0x2F, 0x5E, 0xBC, 0x63, 0xC6, 0x97, 0x35, 0x6A,
0xD4, 0xB3, 0x7D, 0xFA, 0xEF, 0xC5, 0x91, 0x39,
)
def add_round_key(s, k):
for i in range(4):
for j in range(4):
s[i][j] ^= k[i][j]
## AES SubBytes
def sub_bytes(s):
for i in range(4):
for j in range(4):
s[i][j] = s_box[s[i][j]]
def inv_sub_bytes(s):
for i in range(4):
for j in range(4):
s[i][j] = inv_s_box[s[i][j]]
## AES ShiftRows
def shift_rows(s):
s[0][1], s[1][1], s[2][1], s[3][1] = s[1][1], s[2][1], s[3][1], s[0][1]
s[0][2], s[1][2], s[2][2], s[3][2] = s[2][2], s[3][2], s[0][2], s[1][2]
s[0][3], s[1][3], s[2][3], s[3][3] = s[3][3], s[0][3], s[1][3], s[2][3]
def inv_shift_rows(s):
s[0][1], s[1][1], s[2][1], s[3][1] = s[3][1], s[0][1], s[1][1], s[2][1]
s[0][2], s[1][2], s[2][2], s[3][2] = s[2][2], s[3][2], s[0][2], s[1][2]
s[0][3], s[1][3], s[2][3], s[3][3] = s[1][3], s[2][3], s[3][3], s[0][3]
## AES MixColumns
# learned from http://cs.ucsb.edu/~koc/cs178/projects/JT/aes.c
xtime = lambda a: (((a << 1) ^ 0x1B) & 0xFF) if (a & 0x80) else (a << 1)
def mix_single_column(a):
# see Sec 4.1.2 in The Design of Rijndael
t = a[0] ^ a[1] ^ a[2] ^ a[3]
u = a[0]
a[0] ^= t ^ xtime(a[0] ^ a[1])
a[1] ^= t ^ xtime(a[1] ^ a[2])
a[2] ^= t ^ xtime(a[2] ^ a[3])
a[3] ^= t ^ xtime(a[3] ^ u)
def mix_columns(s):
for i in range(4):
mix_single_column(s[i])
def inv_mix_columns(s):
# see Sec 4.1.3 in The Design of Rijndael
for i in range(4):
u = xtime(xtime(s[i][0] ^ s[i][2]))
v = xtime(xtime(s[i][1] ^ s[i][3]))
s[i][0] ^= u
s[i][1] ^= v
s[i][2] ^= u
s[i][3] ^= v
mix_columns(s)
## AES Bytes
def bytes2matrix(text):
""" Converts a 16-byte array into a 4x4 matrix. """
return [list(text[i:i+4]) for i in range(0, len(text), 4)]
def matrix2bytes(matrix):
""" Converts a 4x4 matrix into a 16-byte array. """
return bytes(sum(matrix, []))
def xor_bytes(a, b):
""" Returns a new byte array with the elements xor'ed. """
return bytes(i^j for i, j in zip(a, b))
def split_blocks(message, block_size=16, require_padding=True):
assert len(message) % block_size == 0 or not require_padding
return [message[i:i+block_size] for i in range(0, len(message), block_size)]
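As a quick editorial sanity sketch (not part of the vendored file, and assuming the helpers above are in scope), the forward state transforms composed with their inverses round-trip a 16-byte block:

# sketch: forward transforms followed by their inverses restore the block
block = bytes(range(16))
state = bytes2matrix(block)
sub_bytes(state); shift_rows(state); mix_columns(state)
inv_mix_columns(state); inv_shift_rows(state); inv_sub_bytes(state)
assert matrix2bytes(state) == block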

View File

@ -0,0 +1,58 @@
# MIT License
#
# Copyright (c) 2015 Brian Warner and other contributors
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from . import eddsa
class BadSignatureError(Exception):
pass
SECRETKEYBYTES = 64
PUBLICKEYBYTES = 32
SIGNATUREKEYBYTES = 64
def publickey(seed32):
assert len(seed32) == 32
vk32 = eddsa.publickey(seed32)
return vk32, seed32+vk32
def sign(msg, skvk):
assert len(skvk) == 64
sk = skvk[:32]
vk = skvk[32:]
sig = eddsa.signature(msg, sk, vk)
return sig+msg
def open(sigmsg, vk):
assert len(vk) == 32
sig = sigmsg[:64]
msg = sigmsg[64:]
try:
valid = eddsa.checkvalid(sig, msg, vk)
except ValueError as e:
raise BadSignatureError(e)
except Exception as e:
if str(e) == "decoding point that is not on curve":
raise BadSignatureError(e)
raise
if not valid:
raise BadSignatureError()
return msg
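A hedged usage sketch for the wrapper above (names as defined in this module; open() here is the module's verifier, not the Python builtin):

import os
seed = os.urandom(32)
vk, sk = publickey(seed)             # sk is seed + verifying key (64 bytes)
signed = sign(b"hello", sk)          # 64-byte signature prepended to the message
assert open(signed, vk) == b"hello"
try:
    open(signed[:64] + b"tampered", vk)
except BadSignatureError:
    pass                             # altered messages are rejected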

View File

@ -0,0 +1,368 @@
# MIT License
#
# Copyright (c) 2015 Brian Warner and other contributors
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import binascii, hashlib, itertools
Q = 2**255 - 19
L = 2**252 + 27742317777372353535851937790883648493
def inv(x):
return pow(x, Q-2, Q)
d = -121665 * inv(121666)
I = pow(2,(Q-1)//4,Q)
def xrecover(y):
xx = (y*y-1) * inv(d*y*y+1)
x = pow(xx,(Q+3)//8,Q)
if (x*x - xx) % Q != 0: x = (x*I) % Q
if x % 2 != 0: x = Q-x
return x
By = 4 * inv(5)
Bx = xrecover(By)
B = [Bx % Q,By % Q]
# Extended Coordinates: x=X/Z, y=Y/Z, x*y=T/Z
# http://www.hyperelliptic.org/EFD/g1p/auto-twisted-extended-1.html
def xform_affine_to_extended(pt):
(x, y) = pt
return (x%Q, y%Q, 1, (x*y)%Q) # (X,Y,Z,T)
def xform_extended_to_affine(pt):
(x, y, z, _) = pt
return ((x*inv(z))%Q, (y*inv(z))%Q)
def double_element(pt): # extended->extended
# dbl-2008-hwcd
(X1, Y1, Z1, _) = pt
A = (X1*X1)
B = (Y1*Y1)
C = (2*Z1*Z1)
D = (-A) % Q
J = (X1+Y1) % Q
E = (J*J-A-B) % Q
G = (D+B) % Q
F = (G-C) % Q
H = (D-B) % Q
X3 = (E*F) % Q
Y3 = (G*H) % Q
Z3 = (F*G) % Q
T3 = (E*H) % Q
return (X3, Y3, Z3, T3)
def add_elements(pt1, pt2): # extended->extended
# add-2008-hwcd-3. Slightly slower than add-2008-hwcd-4, but -3 is
# unified, so it's safe for general-purpose addition
(X1, Y1, Z1, T1) = pt1
(X2, Y2, Z2, T2) = pt2
A = ((Y1-X1)*(Y2-X2)) % Q
B = ((Y1+X1)*(Y2+X2)) % Q
C = T1*(2*d)*T2 % Q
D = Z1*2*Z2 % Q
E = (B-A) % Q
F = (D-C) % Q
G = (D+C) % Q
H = (B+A) % Q
X3 = (E*F) % Q
Y3 = (G*H) % Q
T3 = (E*H) % Q
Z3 = (F*G) % Q
return (X3, Y3, Z3, T3)
def scalarmult_element_safe_slow(pt, n):
# this form is slightly slower, but tolerates arbitrary points, including
# those which are not in the main 1*L subgroup. This includes points of
# order 1 (the neutral element Zero), 2, 4, and 8.
assert n >= 0
if n==0:
return xform_affine_to_extended((0,1))
_ = double_element(scalarmult_element_safe_slow(pt, n>>1))
return add_elements(_, pt) if n&1 else _
def _add_elements_nonunfied(pt1, pt2): # extended->extended
# add-2008-hwcd-4 : NOT unified, only for pt1!=pt2. About 10% faster than
# the (unified) add-2008-hwcd-3, and safe to use inside scalarmult if you
# aren't using points of order 1/2/4/8
(X1, Y1, Z1, T1) = pt1
(X2, Y2, Z2, T2) = pt2
A = ((Y1-X1)*(Y2+X2)) % Q
B = ((Y1+X1)*(Y2-X2)) % Q
C = (Z1*2*T2) % Q
D = (T1*2*Z2) % Q
E = (D+C) % Q
F = (B-A) % Q
G = (B+A) % Q
H = (D-C) % Q
X3 = (E*F) % Q
Y3 = (G*H) % Q
Z3 = (F*G) % Q
T3 = (E*H) % Q
return (X3, Y3, Z3, T3)
def scalarmult_element(pt, n): # extended->extended
# This form only works properly when given points that are a member of
# the main 1*L subgroup. It will give incorrect answers when called with
# the points of order 1/2/4/8, including point Zero. (it will also work
# properly when given points of order 2*L/4*L/8*L)
assert n >= 0
if n==0:
return xform_affine_to_extended((0,1))
_ = double_element(scalarmult_element(pt, n>>1))
return _add_elements_nonunfied(_, pt) if n&1 else _
# points are encoded as 32-bytes little-endian, b255 is sign, b2b1b0 are 0
def encodepoint(P):
x = P[0]
y = P[1]
# MSB of output equals x.b0 (=x&1)
# rest of output is little-endian y
assert 0 <= y < (1<<255) # always < 0x7fff..ff
if x & 1:
y += 1<<255
return binascii.unhexlify("%064x" % y)[::-1]
def isoncurve(P):
x = P[0]
y = P[1]
return (-x*x + y*y - 1 - d*x*x*y*y) % Q == 0
class NotOnCurve(Exception):
pass
def decodepoint(s):
unclamped = int(binascii.hexlify(s[:32][::-1]), 16)
clamp = (1 << 255) - 1
y = unclamped & clamp # clear MSB
x = xrecover(y)
if bool(x & 1) != bool(unclamped & (1<<255)): x = Q-x
P = [x,y]
if not isoncurve(P): raise NotOnCurve("decoding point that is not on curve")
return P
# scalars are encoded as 32-bytes little-endian
def bytes_to_scalar(s):
assert len(s) == 32, len(s)
return int(binascii.hexlify(s[::-1]), 16)
def bytes_to_clamped_scalar(s):
# Ed25519 private keys clamp the scalar to ensure two things:
# 1: integer value is in L/2 .. L, to avoid small-logarithm
# non-wraparound
# 2: low-order 3 bits are zero, so a small-subgroup attack won't learn
# any information
# set the top two bits to 01, and the bottom three to 000
a_unclamped = bytes_to_scalar(s)
AND_CLAMP = (1<<254) - 1 - 7
OR_CLAMP = (1<<254)
a_clamped = (a_unclamped & AND_CLAMP) | OR_CLAMP
return a_clamped
def random_scalar(entropy_f): # 0..L-1 inclusive
# reduce the bias to a safe level by generating 256 extra bits
oversized = int(binascii.hexlify(entropy_f(32+32)), 16)
return oversized % L
def password_to_scalar(pw):
oversized = hashlib.sha512(pw).digest()
return int(binascii.hexlify(oversized), 16) % L
def scalar_to_bytes(y):
y = y % L
assert 0 <= y < 2**256
return binascii.unhexlify("%064x" % y)[::-1]
# Elements, of various orders
def is_extended_zero(XYTZ):
# catch Zero
(X, Y, Z, T) = XYTZ
Y = Y % Q
Z = Z % Q
if X==0 and Y==Z and Y!=0:
return True
return False
class ElementOfUnknownGroup:
# This is used for points of order 2,4,8,2*L,4*L,8*L
def __init__(self, XYTZ):
assert isinstance(XYTZ, tuple)
assert len(XYTZ) == 4
self.XYTZ = XYTZ
def add(self, other):
if not isinstance(other, ElementOfUnknownGroup):
raise TypeError("elements can only be added to other elements")
sum_XYTZ = add_elements(self.XYTZ, other.XYTZ)
if is_extended_zero(sum_XYTZ):
return Zero
return ElementOfUnknownGroup(sum_XYTZ)
def scalarmult(self, s):
if isinstance(s, ElementOfUnknownGroup):
raise TypeError("elements cannot be multiplied together")
assert s >= 0
product = scalarmult_element_safe_slow(self.XYTZ, s)
return ElementOfUnknownGroup(product)
def to_bytes(self):
return encodepoint(xform_extended_to_affine(self.XYTZ))
def __eq__(self, other):
return self.to_bytes() == other.to_bytes()
def __ne__(self, other):
return not self == other
class Element(ElementOfUnknownGroup):
# this only holds elements in the main 1*L subgroup. It never holds Zero,
# or elements of order 1/2/4/8, or 2*L/4*L/8*L.
def add(self, other):
if not isinstance(other, ElementOfUnknownGroup):
raise TypeError("elements can only be added to other elements")
sum_element = ElementOfUnknownGroup.add(self, other)
if sum_element is Zero:
return sum_element
if isinstance(other, Element):
# adding two subgroup elements results in another subgroup
# element, or Zero, and we've already excluded Zero
return Element(sum_element.XYTZ)
# not necessarily a subgroup member, so assume not
return sum_element
def scalarmult(self, s):
if isinstance(s, ElementOfUnknownGroup):
raise TypeError("elements cannot be multiplied together")
# scalarmult of subgroup members can be done modulo the subgroup
# order, and using the faster non-unified function.
s = s % L
# scalarmult(s=0) gets you Zero
if s == 0:
return Zero
# scalarmult(s=1) gets you self, which is a subgroup member
# scalarmult(s<grouporder) gets you a different subgroup member
return Element(scalarmult_element(self.XYTZ, s))
# negation and subtraction only make sense for the main subgroup
def negate(self):
# slow. Prefer e.scalarmult(-pw) to e.scalarmult(pw).negate()
return Element(scalarmult_element(self.XYTZ, L-2))
def subtract(self, other):
return self.add(other.negate())
class _ZeroElement(ElementOfUnknownGroup):
def add(self, other):
return other # zero+anything = anything
def scalarmult(self, s):
return self # zero*anything = zero
def negate(self):
return self # -zero = zero
def subtract(self, other):
return self.add(other.negate())
Base = Element(xform_affine_to_extended(B))
Zero = _ZeroElement(xform_affine_to_extended((0,1))) # the neutral (identity) element
_zero_bytes = Zero.to_bytes()
def arbitrary_element(seed): # unknown DL
# TODO: if we don't need uniformity, maybe use just sha256 here?
hseed = hashlib.sha512(seed).digest()
y = int(binascii.hexlify(hseed), 16) % Q
# we try successive Y values until we find a valid point
for plus in itertools.count(0):
y_plus = (y + plus) % Q
x = xrecover(y_plus)
Pa = [x,y_plus] # no attempt to use both "positive" and "negative" X
# only about 50% of Y coordinates map to valid curve points (I think
# the other half give you points on the "twist").
if not isoncurve(Pa):
continue
P = ElementOfUnknownGroup(xform_affine_to_extended(Pa))
# even if the point is on our curve, it may not be in our particular
# (order=L) subgroup. The curve has order 8*L, so an arbitrary point
# could have order 1,2,4,8,1*L,2*L,4*L,8*L (everything which divides
# the group order).
# [I MAY BE COMPLETELY WRONG ABOUT THIS, but my brief statistical
# tests suggest it's not too far off] There are phi(x) points with
# order x, so:
# 1 element of order 1: [(x=0,y=1)=Zero]
# 1 element of order 2 [(x=0,y=-1)]
# 2 elements of order 4
# 4 elements of order 8
# L-1 elements of order L (including Base)
# L-1 elements of order 2*L
# 2*(L-1) elements of order 4*L
# 4*(L-1) elements of order 8*L
# So 50% of random points will have order 8*L, 25% will have order
# 4*L, 13% order 2*L, and 13% will have our desired order 1*L (and a
# vanishingly small fraction will have 1/2/4/8). If we multiply any
# of the 8*L points by 2, we're sure to get an 4*L point (and
# multiplying a 4*L point by 2 gives us a 2*L point, and so on).
# Multiplying a 1*L point by 2 gives us a different 1*L point. So
# multiplying by 8 gets us from almost any point into a uniform point
# on the correct 1*L subgroup.
P8 = P.scalarmult(8)
# if we got really unlucky and picked one of the 8 low-order points,
# multiplying by 8 will get us to the identity (Zero), which we check
# for explicitly.
if is_extended_zero(P8.XYTZ):
continue
# Test that we're finally in the right group. We want to scalarmult
# by L, and we want to *not* use the trick in Group.scalarmult()
# which does x%L, because that would bypass the check we care about.
# P is still an _ElementOfUnknownGroup, which doesn't use x%L because
# that's not correct for points outside the main group.
assert is_extended_zero(P8.scalarmult(L).XYTZ)
return Element(P8.XYTZ)
# never reached
def bytes_to_unknown_group_element(bytes):
# this accepts all elements, including Zero and wrong-subgroup ones
if bytes == _zero_bytes:
return Zero
XYTZ = xform_affine_to_extended(decodepoint(bytes))
return ElementOfUnknownGroup(XYTZ)
def bytes_to_element(bytes):
# this strictly only accepts elements in the right subgroup
P = bytes_to_unknown_group_element(bytes)
if P is Zero:
raise ValueError("element was Zero")
if not is_extended_zero(P.scalarmult(L).XYTZ):
raise ValueError("element is not in the right group")
# the point is in the expected 1*L subgroup, not in the 2/4/8 groups,
# or in the 2*L/4*L/8*L groups. Promote it to a correct-group Element.
return Element(P.XYTZ)
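As a hedged sketch of how these pieces fit together (assuming the module's names are in scope), a clamped scalar times Base yields a main-subgroup Element whose 32-byte encoding survives the strict decoder:

import os
a = bytes_to_clamped_scalar(os.urandom(32))
A = Base.scalarmult(a)                 # Element in the main 1*L subgroup
A_bytes = A.to_bytes()                 # 32-byte little-endian encoding, sign in bit 255
assert bytes_to_element(A_bytes) == A  # strict decoder accepts it unchanged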

View File

@ -0,0 +1,213 @@
# MIT License
#
# Copyright (c) 2015 Brian Warner and other contributors
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import os
import base64
from . import _ed25519
BadSignatureError = _ed25519.BadSignatureError
def create_keypair(entropy=os.urandom):
SEEDLEN = int(_ed25519.SECRETKEYBYTES/2)
assert SEEDLEN == 32
seed = entropy(SEEDLEN)
sk = SigningKey(seed)
vk = sk.get_verifying_key()
return sk, vk
class BadPrefixError(Exception):
pass
def remove_prefix(s_bytes, prefix):
assert(type(s_bytes) == type(prefix))
if s_bytes[:len(prefix)] != prefix:
raise BadPrefixError("did not see expected '%s' prefix" % (prefix,))
return s_bytes[len(prefix):]
def to_ascii(s_bytes, prefix="", encoding="base64"):
"""Return a version-prefixed ASCII representation of the given binary
string. 'encoding' indicates how to do the encoding, and can be one of:
* base64
* base32
* base16 (or hex)
This function handles bytes, not bits, so it does not append any trailing
'=' (unlike standard base64.b64encode). It also lowercases the base32
output.
'prefix' will be prepended to the encoded form, and is useful for
distinguishing the purpose and version of the binary string. E.g. you
could prepend 'pub0-' to a VerifyingKey string to allow the receiving
code to raise a useful error if someone pasted in a signature string by
mistake.
"""
assert isinstance(s_bytes, bytes)
if not isinstance(prefix, bytes):
prefix = prefix.encode('ascii')
if encoding == "base64":
s_ascii = base64.b64encode(s_bytes).decode('ascii').rstrip("=")
elif encoding == "base32":
s_ascii = base64.b32encode(s_bytes).decode('ascii').rstrip("=").lower()
elif encoding in ("base16", "hex"):
s_ascii = base64.b16encode(s_bytes).decode('ascii').lower()
else:
raise NotImplementedError
return prefix+s_ascii.encode('ascii')
def from_ascii(s_ascii, prefix="", encoding="base64"):
"""This is the opposite of to_ascii. It will throw BadPrefixError if
the prefix is not found.
"""
if isinstance(s_ascii, bytes):
s_ascii = s_ascii.decode('ascii')
if isinstance(prefix, bytes):
prefix = prefix.decode('ascii')
s_ascii = remove_prefix(s_ascii.strip(), prefix)
if encoding == "base64":
s_ascii += "="*((4 - len(s_ascii)%4)%4)
s_bytes = base64.b64decode(s_ascii)
elif encoding == "base32":
s_ascii += "="*((8 - len(s_ascii)%8)%8)
s_bytes = base64.b32decode(s_ascii.upper())
elif encoding in ("base16", "hex"):
s_bytes = base64.b16decode(s_ascii.upper())
else:
raise NotImplementedError
return s_bytes
class SigningKey(object):
# this can only be used to reconstruct a key created by create_keypair().
def __init__(self, sk_s, prefix="", encoding=None):
assert isinstance(sk_s, bytes)
if not isinstance(prefix, bytes):
prefix = prefix.encode('ascii')
sk_s = remove_prefix(sk_s, prefix)
if encoding is not None:
sk_s = from_ascii(sk_s, encoding=encoding)
if len(sk_s) == 32:
# create from seed
vk_s, sk_s = _ed25519.publickey(sk_s)
else:
if len(sk_s) != 32+32:
raise ValueError("SigningKey takes 32-byte seed or 64-byte string")
self.sk_s = sk_s # seed+pubkey
self.vk_s = sk_s[32:] # just pubkey
def to_bytes(self, prefix=""):
if not isinstance(prefix, bytes):
prefix = prefix.encode('ascii')
return prefix+self.sk_s
def to_ascii(self, prefix="", encoding=None):
assert encoding
if not isinstance(prefix, bytes):
prefix = prefix.encode('ascii')
return to_ascii(self.to_seed(), prefix, encoding)
def to_seed(self, prefix=""):
if not isinstance(prefix, bytes):
prefix = prefix.encode('ascii')
return prefix+self.sk_s[:32]
def __eq__(self, them):
if not isinstance(them, object): return False
return (them.__class__ == self.__class__
and them.sk_s == self.sk_s)
def get_verifying_key(self):
return VerifyingKey(self.vk_s)
def sign(self, msg, prefix="", encoding=None):
assert isinstance(msg, bytes)
if not isinstance(prefix, bytes):
prefix = prefix.encode('ascii')
sig_and_msg = _ed25519.sign(msg, self.sk_s)
# the response is R+S+msg
sig_R = sig_and_msg[0:32]
sig_S = sig_and_msg[32:64]
msg_out = sig_and_msg[64:]
sig_out = sig_R + sig_S
assert msg_out == msg
if encoding:
return to_ascii(sig_out, prefix, encoding)
return prefix+sig_out
class VerifyingKey(object):
def __init__(self, vk_s, prefix="", encoding=None):
if not isinstance(prefix, bytes):
prefix = prefix.encode('ascii')
if not isinstance(vk_s, bytes):
vk_s = vk_s.encode('ascii')
assert isinstance(vk_s, bytes)
vk_s = remove_prefix(vk_s, prefix)
if encoding is not None:
vk_s = from_ascii(vk_s, encoding=encoding)
assert len(vk_s) == 32
self.vk_s = vk_s
def to_bytes(self, prefix=""):
if not isinstance(prefix, bytes):
prefix = prefix.encode('ascii')
return prefix+self.vk_s
def to_ascii(self, prefix="", encoding=None):
assert encoding
if not isinstance(prefix, bytes):
prefix = prefix.encode('ascii')
return to_ascii(self.vk_s, prefix, encoding)
def __eq__(self, them):
if not isinstance(them, object): return False
return (them.__class__ == self.__class__
and them.vk_s == self.vk_s)
def verify(self, sig, msg, prefix="", encoding=None):
if not isinstance(sig, bytes):
sig = sig.encode('ascii')
if not isinstance(prefix, bytes):
prefix = prefix.encode('ascii')
assert isinstance(sig, bytes)
assert isinstance(msg, bytes)
if encoding:
sig = from_ascii(sig, prefix, encoding)
else:
sig = remove_prefix(sig, prefix)
assert len(sig) == 64
sig_R = sig[:32]
sig_S = sig[32:]
sig_and_msg = sig_R + sig_S + msg
# this might raise BadSignatureError
msg2 = _ed25519.open(sig_and_msg, self.vk_s)
assert msg2 == msg
def selftest():
message = b"crypto libraries should always test themselves at powerup"
sk = SigningKey(b"priv0-VIsfn5OFGa09Un2MR6Hm7BQ5++xhcQskU2OGXG8jSJl4cWLZrRrVcSN2gVYMGtZT+3354J5jfmqAcuRSD9KIyg",
prefix="priv0-", encoding="base64")
vk = VerifyingKey(b"pub0-eHFi2a0a1XEjdoFWDBrWU/t9+eCeY35qgHLkUg/SiMo",
prefix="pub0-", encoding="base64")
assert sk.get_verifying_key() == vk
sig = sk.sign(message, prefix="sig0-", encoding="base64")
assert sig == b"sig0-E/QrwtSF52x8+q0l4ahA7eJbRKc777ClKNg217Q0z4fiYMCdmAOI+rTLVkiFhX6k3D+wQQfKdJYMxaTUFfv1DQ", sig
vk.verify(sig, message, prefix="sig0-", encoding="base64")
selftest()
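A hedged usage sketch for the wrappers above; the "pub0-" and "sig0-" prefixes are arbitrary examples in the spirit of the to_ascii() docstring, not fixed constants:

sk, vk = create_keypair()
vk_ascii = vk.to_ascii(prefix="pub0-", encoding="base64")
assert VerifyingKey(vk_ascii, prefix="pub0-", encoding="base64") == vk
sig = sk.sign(b"message", prefix="sig0-", encoding="base64")
vk.verify(sig, b"message", prefix="sig0-", encoding="base64")  # raises BadSignatureError on failure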

View File

@ -0,0 +1,94 @@
# MIT License
#
# Copyright (c) 2015 Brian Warner and other contributors
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from RNS.Cryptography.Hashes import sha512
from .basic import (bytes_to_clamped_scalar,
bytes_to_scalar, scalar_to_bytes,
bytes_to_element, Base)
import hashlib, binascii
def H(m):
return sha512(m)
def publickey(seed):
# turn first half of SHA512(seed) into scalar, then into point
assert len(seed) == 32
a = bytes_to_clamped_scalar(H(seed)[:32])
A = Base.scalarmult(a)
return A.to_bytes()
def Hint(m):
h = H(m)
return int(binascii.hexlify(h[::-1]), 16)
def signature(m,sk,pk):
assert len(sk) == 32 # seed
assert len(pk) == 32
h = H(sk[:32])
a_bytes, inter = h[:32], h[32:]
a = bytes_to_clamped_scalar(a_bytes)
r = Hint(inter + m)
R = Base.scalarmult(r)
R_bytes = R.to_bytes()
S = r + Hint(R_bytes + pk + m) * a
return R_bytes + scalar_to_bytes(S)
def checkvalid(s, m, pk):
if len(s) != 64: raise Exception("signature length is wrong")
if len(pk) != 32: raise Exception("public-key length is wrong")
R = bytes_to_element(s[:32])
A = bytes_to_element(pk)
S = bytes_to_scalar(s[32:])
h = Hint(s[:32] + pk + m)
v1 = Base.scalarmult(S)
v2 = R.add(A.scalarmult(h))
return v1==v2
# wrappers
import os
def create_signing_key():
seed = os.urandom(32)
return seed
def create_verifying_key(signing_key):
return publickey(signing_key)
def sign(skbytes, msg):
"""Return just the signature, given the message and just the secret
key."""
if len(skbytes) != 32:
raise ValueError("Bad signing key length %d" % len(skbytes))
vkbytes = create_verifying_key(skbytes)
sig = signature(msg, skbytes, vkbytes)
return sig
def verify(vkbytes, sig, msg):
if len(vkbytes) != 32:
raise ValueError("Bad verifying key length %d" % len(vkbytes))
if len(sig) != 64:
raise ValueError("Bad signature length %d" % len(sig))
rc = checkvalid(sig, msg, vkbytes)
if not rc:
raise ValueError("rc != 0", rc)
return True
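A minimal sketch of the raw-byte convenience wrappers defined above:

sk = create_signing_key()            # 32 random seed bytes
vk = create_verifying_key(sk)        # 32-byte public key
sig = sign(sk, b"payload")           # 64-byte detached signature
assert verify(vk, sig, b"payload")   # True, or raises ValueError if the signature is bad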

View File

@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@ -20,14 +20,11 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import base64
import math
import time
import RNS
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.backends import default_backend
from RNS.Cryptography import Fernet
class Callbacks:
def __init__(self):
@ -72,8 +69,10 @@ class Destination:
OUT = 0x12;
directions = [IN, OUT]
PR_TAG_WINDOW = 30
@staticmethod
def full_name(app_name, *aspects):
def expand_name(identity, app_name, *aspects):
"""
:returns: A string containing the full human-readable name of the destination, for an app_name and a number of aspects.
"""
@ -84,23 +83,30 @@ class Destination:
name = app_name
for aspect in aspects:
if "." in aspect: raise ValueError("Dots can't be used in aspects")
name = name + "." + aspect
name += "." + aspect
if identity != None:
name += "." + identity.hexhash
return name
@staticmethod
def hash(app_name, *aspects):
def hash(identity, app_name, *aspects):
"""
:returns: A destination name in adressable hash form, for an app_name and a number of aspects.
"""
name = Destination.full_name(app_name, *aspects)
name_hash = RNS.Identity.full_hash(Destination.expand_name(None, app_name, *aspects).encode("utf-8"))[:(RNS.Identity.NAME_HASH_LENGTH//8)]
addr_hash_material = name_hash
if identity != None:
if isinstance(identity, RNS.Identity):
addr_hash_material += identity.hash
elif isinstance(identity, bytes) and len(identity) == RNS.Reticulum.TRUNCATED_HASHLENGTH//8:
addr_hash_material += identity
else:
raise TypeError("Invalid material supplied for destination hash calculation")
# Create a digest for the destination
digest = hashes.Hash(hashes.SHA256(), backend=default_backend())
digest.update(name.encode("UTF-8"))
return digest.finalize()[:10]
return RNS.Identity.full_hash(addr_hash_material)[:RNS.Reticulum.TRUNCATED_HASHLENGTH//8]
@staticmethod
def app_and_aspects_from_name(full_name):
@ -116,14 +122,16 @@ class Destination:
:returns: A destination name in adressable hash form, for a full name string and Identity instance.
"""
app_name, aspects = Destination.app_and_aspects_from_name(full_name)
aspects.append(identity.hexhash)
return Destination.hash(app_name, *aspects)
return Destination.hash(identity, app_name, *aspects)
def __init__(self, identity, direction, type, app_name, *aspects):
# Check input values and build name string
if "." in app_name: raise ValueError("Dots can't be used in app names")
if not type in Destination.types: raise ValueError("Unknown destination type")
if not direction in Destination.directions: raise ValueError("Unknown destination direction")
self.accept_link_requests = True
self.callbacks = Callbacks()
self.request_handlers = {}
self.type = type
@ -131,11 +139,9 @@ class Destination:
self.proof_strategy = Destination.PROVE_NONE
self.mtu = 0
self.path_responses = {}
self.links = []
if identity != None and type == Destination.SINGLE:
aspects = aspects+(identity.hexhash,)
if identity == None and direction == Destination.IN and self.type != Destination.PLAIN:
identity = RNS.Identity()
aspects = aspects+(identity.hexhash,)
@ -144,12 +150,14 @@ class Destination:
raise TypeError("Selected destination type PLAIN cannot hold an identity")
self.identity = identity
self.name = Destination.expand_name(identity, app_name, *aspects)
self.name = Destination.full_name(app_name, *aspects)
self.hash = Destination.hash(app_name, *aspects)
# Generate the destination address hash
self.hash = Destination.hash(self.identity, app_name, *aspects)
self.name_hash = RNS.Identity.full_hash(self.expand_name(None, app_name, *aspects).encode("utf-8"))[:(RNS.Identity.NAME_HASH_LENGTH//8)]
self.hexhash = self.hash.hex()
self.default_app_data = None
self.default_app_data = None
self.callback = None
self.proofcallback = None
@ -163,7 +171,7 @@ class Destination:
return "<"+self.name+"/"+self.hexhash+">"
def announce(self, app_data=None, path_response=False):
def announce(self, app_data=None, path_response=False, attached_interface=None, tag=None, send=True):
"""
Creates an announce packet for this destination and broadcasts it on all
relevant interfaces. Application specific data can be added to the announce.
@ -173,43 +181,90 @@ class Destination:
"""
if self.type != Destination.SINGLE:
raise TypeError("Only SINGLE destination types can be announced")
destination_hash = self.hash
random_hash = RNS.Identity.get_random_hash()[0:5]+int(time.time()).to_bytes(5, "big")
if app_data == None and self.default_app_data != None:
if isinstance(self.default_app_data, bytes):
app_data = self.default_app_data
elif callable(self.default_app_data):
returned_app_data = self.default_app_data()
if isinstance(returned_app_data, bytes):
app_data = returned_app_data
if self.direction != Destination.IN:
raise TypeError("Only IN destination types can be announced")
signed_data = self.hash+self.identity.get_public_key()+random_hash
if app_data != None:
signed_data += app_data
now = time.time()
stale_responses = []
for entry_tag in self.path_responses:
entry = self.path_responses[entry_tag]
if now > entry[0]+Destination.PR_TAG_WINDOW:
stale_responses.append(entry_tag)
signature = self.identity.sign(signed_data)
for entry_tag in stale_responses:
self.path_responses.pop(entry_tag)
announce_data = self.identity.get_public_key()+random_hash+signature
if (path_response == True and tag != None) and tag in self.path_responses:
# This code is currently not used, since Transport will block duplicate
# path requests based on tags. When multi-path support is implemented in
# Transport, this will allow Transport to detect redundant paths to the
# same destination, and select the best one based on chosen criteria,
# since it will be able to detect that a single emitted announce was
# received via multiple paths. The difference in reception time will
# potentially also be useful in determining characteristics of the
# multiple available paths, and to choose the best one.
RNS.log("Using cached announce data for answering path request with tag "+RNS.prettyhexrep(tag), RNS.LOG_EXTREME)
announce_data = self.path_responses[tag][1]
else:
destination_hash = self.hash
random_hash = RNS.Identity.get_random_hash()[0:5]+int(time.time()).to_bytes(5, "big")
if app_data != None:
announce_data += app_data
if app_data == None and self.default_app_data != None:
if isinstance(self.default_app_data, bytes):
app_data = self.default_app_data
elif callable(self.default_app_data):
returned_app_data = self.default_app_data()
if isinstance(returned_app_data, bytes):
app_data = returned_app_data
signed_data = self.hash+self.identity.get_public_key()+self.name_hash+random_hash
if app_data != None:
signed_data += app_data
signature = self.identity.sign(signed_data)
announce_data = self.identity.get_public_key()+self.name_hash+random_hash+signature
if app_data != None:
announce_data += app_data
self.path_responses[tag] = [time.time(), announce_data]
if path_response:
announce_context = RNS.Packet.PATH_RESPONSE
else:
announce_context = RNS.Packet.NONE
RNS.Packet(self, announce_data, RNS.Packet.ANNOUNCE, context = announce_context).send()
announce_packet = RNS.Packet(self, announce_data, RNS.Packet.ANNOUNCE, context = announce_context, attached_interface = attached_interface)
if send:
announce_packet.send()
else:
return announce_packet
def accepts_links(self, accepts = None):
"""
Set or query whether the destination accepts incoming link requests.
:param accepts: If ``True`` or ``False``, this method sets whether the destination accepts incoming link requests. If not provided or ``None``, the method returns whether the destination currently accepts link requests.
:returns: ``True`` or ``False`` depending on whether the destination accepts incoming link requests, if the *accepts* parameter is not provided or ``None``.
"""
if accepts == None:
return self.accept_link_requests
if accepts:
self.accept_link_requests = True
else:
self.accept_link_requests = False
def set_link_established_callback(self, callback):
"""
Registers a function to be called when a link has been established to
this destination.
:param callback: A function or method to be called.
:param callback: A function or method with the signature *callback(link)* to be called when a new link is established with this destination.
"""
self.callbacks.link_established = callback
@ -218,7 +273,7 @@ class Destination:
Registers a function to be called when a packet has been received by
this destination.
:param callback: A function or method to be called.
:param callback: A function or method with the signature *callback(data, packet)* to be called when this destination receives a packet.
"""
self.callbacks.packet = callback
@ -228,7 +283,7 @@ class Destination:
a packet sent to this destination. Allows control over when and if
proofs should be returned for received packets.
:param callback: A function or method to be called. The callback must return one of True or False. If the callback returns True, a proof will be sent. If it returns False, a proof will not be sent.
:param callback: A function or method with the signature *callback(packet)* to be called when a packet that requests a proof is received. The callback must return one of True or False. If the callback returns True, a proof will be sent. If it returns False, a proof will not be sent.
"""
self.callbacks.proof_requested = callback
@ -249,7 +304,7 @@ class Destination:
Registers a request handler.
:param path: The path for the request handler to be registered.
:param response_generator: A function or method with the signature *response_generator(path, data, request_id, remote_identity, requested_at)* to be called. Whatever this function returns will be sent as a response to the requester. If the function returns ``None``, no response will be sent.
:param response_generator: A function or method with the signature *response_generator(path, data, request_id, link_id, remote_identity, requested_at)* to be called. Whatever this function returns will be sent as a response to the requester. If the function returns ``None``, no response will be sent.
:param allow: One of ``RNS.Destination.ALLOW_NONE``, ``RNS.Destination.ALLOW_ALL`` or ``RNS.Destination.ALLOW_LIST``. If ``RNS.Destination.ALLOW_LIST`` is set, the request handler will only respond to requests for identified peers in the supplied list.
:param allowed_list: A list of *bytes-like* :ref:`RNS.Identity<api-identity>` hashes.
:raises: ``ValueError`` if any of the supplied arguments are invalid.
@ -298,9 +353,10 @@ class Destination:
def incoming_link_request(self, data, packet):
link = RNS.Link.validate_request(self, data, packet)
if link != None:
self.links.append(link)
if self.accept_link_requests:
link = RNS.Link.validate_request(self, data, packet)
if link != None:
self.links.append(link)
def create_keys(self):
"""
@ -315,8 +371,8 @@ class Destination:
raise TypeError("A single destination holds keys through an Identity instance")
if self.type == Destination.GROUP:
self.prv_bytes = base64.urlsafe_b64decode(Fernet.generate_key())
self.prv = Fernet(base64.urlsafe_b64encode(self.prv_bytes))
self.prv_bytes = Fernet.generate_key()
self.prv = Fernet(self.prv_bytes)
def get_private_key(self):
@ -348,7 +404,7 @@ class Destination:
if self.type == Destination.GROUP:
self.prv_bytes = key
self.prv = Fernet(base64.urlsafe_b64encode(self.prv_bytes))
self.prv = Fernet(self.prv_bytes)
def load_public_key(self, key):
if self.type != Destination.SINGLE:
@ -373,7 +429,7 @@ class Destination:
if self.type == Destination.GROUP:
if hasattr(self, "prv") and self.prv != None:
try:
return base64.urlsafe_b64decode(self.prv.encrypt(plaintext))
return self.prv.encrypt(plaintext)
except Exception as e:
RNS.log("The GROUP destination could not encrypt data", RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
@ -398,7 +454,7 @@ class Destination:
if self.type == Destination.GROUP:
if hasattr(self, "prv") and self.prv != None:
try:
return self.prv.decrypt(base64.urlsafe_b64encode(ciphertext))
return self.prv.decrypt(ciphertext)
except Exception as e:
RNS.log("The GROUP destination could not decrypt data", RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)

View File

@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@ -20,23 +20,18 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import base64
import math
import os
import RNS
import time
import atexit
import base64
from .vendor import umsgpack as umsgpack
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.fernet import Fernet
import hashlib
from .vendor import umsgpack as umsgpack
from RNS.Cryptography import X25519PrivateKey, X25519PublicKey, Ed25519PrivateKey, Ed25519PublicKey
from RNS.Cryptography import Fernet
cio_default_backend = default_backend()
class Identity:
"""
@ -58,13 +53,12 @@ class Identity:
"""
# Non-configurable constants
FERNET_VERSION = 0x80
FERNET_OVERHEAD = 57 # In bytes
OPTIMISED_FERNET_OVERHEAD = 54 # In bytes
FERNET_OVERHEAD = RNS.Cryptography.Fernet.FERNET_OVERHEAD
AES128_BLOCKSIZE = 16 # In bytes
HASHLENGTH = 256 # In bits
SIGLENGTH = KEYSIZE # In bits
NAME_HASH_LENGTH = 80
TRUNCATED_HASHLENGTH = RNS.Reticulum.TRUNCATED_HASHLENGTH
"""
Constant specifying the truncated hash length (in bits) used by Reticulum
@ -97,6 +91,13 @@ class Identity:
identity.app_data = identity_data[3]
return identity
else:
for registered_destination in RNS.Transport.destinations:
if destination_hash == registered_destination.hash:
identity = Identity(create_keys=False)
identity.load_public_key(registered_destination.identity.get_public_key())
identity.app_data = None
return identity
return None
@staticmethod
@ -115,7 +116,26 @@ class Identity:
@staticmethod
def save_known_destinations():
# TODO: Improve the storage method so we don't have to
# deserialize and serialize the entire table on every
# save, but the only changes. It might be possible to
# simply overwrite on exit now that every local client
# disconnect triggers a data persist.
try:
if hasattr(Identity, "saving_known_destinations"):
wait_interval = 0.2
wait_timeout = 5
wait_start = time.time()
while Identity.saving_known_destinations:
time.sleep(wait_interval)
if time.time() > wait_start+wait_timeout:
RNS.log("Could not save known destinations to storage, waiting for previous save operation timed out.", RNS.LOG_ERROR)
return False
Identity.saving_known_destinations = True
save_start = time.time()
storage_known_destinations = {}
if os.path.isfile(RNS.Reticulum.storagepath+"/known_destinations"):
try:
@ -125,27 +145,48 @@ class Identity:
except:
pass
for destination_hash in storage_known_destinations:
if not destination_hash in Identity.known_destinations:
Identity.known_destinations[destination_hash] = storage_known_destinations[destination_hash]
try:
for destination_hash in storage_known_destinations:
if not destination_hash in Identity.known_destinations:
Identity.known_destinations[destination_hash] = storage_known_destinations[destination_hash]
except Exception as e:
RNS.log("Skipped recombining known destinations from disk, since an error occurred: "+str(e), RNS.LOG_WARNING)
RNS.log("Saving known destinations to storage...", RNS.LOG_VERBOSE)
RNS.log("Saving "+str(len(Identity.known_destinations))+" known destinations to storage...", RNS.LOG_DEBUG)
file = open(RNS.Reticulum.storagepath+"/known_destinations","wb")
umsgpack.dump(Identity.known_destinations, file)
file.close()
RNS.log("Done saving known destinations to storage", RNS.LOG_VERBOSE)
save_time = time.time() - save_start
if save_time < 1:
time_str = str(round(save_time*1000,2))+"ms"
else:
time_str = str(round(save_time,2))+"s"
RNS.log("Saved known destinations to storage in "+time_str, RNS.LOG_DEBUG)
except Exception as e:
RNS.log("Error while saving known destinations to disk, the contained exception was: "+str(e), RNS.LOG_ERROR)
RNS.trace_exception(e)
Identity.saving_known_destinations = False
@staticmethod
def load_known_destinations():
if os.path.isfile(RNS.Reticulum.storagepath+"/known_destinations"):
try:
file = open(RNS.Reticulum.storagepath+"/known_destinations","rb")
Identity.known_destinations = umsgpack.load(file)
loaded_known_destinations = umsgpack.load(file)
file.close()
Identity.known_destinations = {}
for known_destination in loaded_known_destinations:
if len(known_destination) == RNS.Reticulum.TRUNCATED_HASHLENGTH//8:
Identity.known_destinations[known_destination] = loaded_known_destinations[known_destination]
RNS.log("Loaded "+str(len(Identity.known_destinations))+" known destination from storage", RNS.LOG_VERBOSE)
except:
except Exception as e:
RNS.log("Error loading known destinations from disk, file will be recreated on exit", RNS.LOG_ERROR)
else:
RNS.log("Destinations file does not exist, no known destinations loaded", RNS.LOG_VERBOSE)
@ -158,10 +199,7 @@ class Identity:
:param data: Data to be hashed as *bytes*.
:returns: SHA-256 hash as *bytes*
"""
digest = hashes.Hash(hashes.SHA256(), backend=default_backend())
digest.update(data)
return digest.finalize()
return RNS.Cryptography.sha256(data)
@staticmethod
def truncated_hash(data):
@ -184,38 +222,73 @@ class Identity:
return Identity.truncated_hash(os.urandom(Identity.TRUNCATED_HASHLENGTH//8))
@staticmethod
def validate_announce(packet):
def validate_announce(packet, only_validate_signature=False):
try:
if packet.packet_type == RNS.Packet.ANNOUNCE:
destination_hash = packet.destination_hash
public_key = packet.data[:Identity.KEYSIZE//8]
random_hash = packet.data[Identity.KEYSIZE//8:Identity.KEYSIZE//8+10]
signature = packet.data[Identity.KEYSIZE//8+10:Identity.KEYSIZE//8+10+Identity.KEYSIZE//8]
name_hash = packet.data[Identity.KEYSIZE//8:Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8]
random_hash = packet.data[Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8:Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8+10]
signature = packet.data[Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8+10:Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8+10+Identity.SIGLENGTH//8]
app_data = b""
if len(packet.data) > Identity.KEYSIZE//8+10+Identity.KEYSIZE//8:
app_data = packet.data[Identity.KEYSIZE//8+10+Identity.KEYSIZE//8:]
if len(packet.data) > Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8+10+Identity.SIGLENGTH//8:
app_data = packet.data[Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8+10+Identity.SIGLENGTH//8:]
signed_data = destination_hash+public_key+random_hash+app_data
signed_data = destination_hash+public_key+name_hash+random_hash+app_data
if not len(packet.data) > Identity.KEYSIZE//8+10+Identity.KEYSIZE//8:
if not len(packet.data) > Identity.KEYSIZE//8+Identity.NAME_HASH_LENGTH//8+10+Identity.SIGLENGTH//8:
app_data = None
announced_identity = Identity(create_keys=False)
announced_identity.load_public_key(public_key)
if announced_identity.pub != None and announced_identity.validate(signature, signed_data):
RNS.Identity.remember(packet.get_hash(), destination_hash, public_key, app_data)
del announced_identity
if only_validate_signature:
del announced_identity
return True
hash_material = name_hash+announced_identity.hash
expected_hash = RNS.Identity.full_hash(hash_material)[:RNS.Reticulum.TRUNCATED_HASHLENGTH//8]
if destination_hash == expected_hash:
# Check if we already have a public key for this destination
# and make sure the public key is not different.
if destination_hash in Identity.known_destinations:
if public_key != Identity.known_destinations[destination_hash][2]:
# In reality, this should never occur, but in the odd case
# that someone manages a hash collision, we reject the announce.
RNS.log("Received announce with valid signature and destination hash, but announced public key does not match already known public key.", RNS.LOG_CRITICAL)
RNS.log("This may indicate an attempt to modify network paths, or a random hash collision. The announce was rejected.", RNS.LOG_CRITICAL)
return False
RNS.Identity.remember(packet.get_hash(), destination_hash, public_key, app_data)
del announced_identity
if packet.rssi != None or packet.snr != None:
signal_str = " ["
if packet.rssi != None:
signal_str += "RSSI "+str(packet.rssi)+"dBm"
if packet.snr != None:
signal_str += ", "
if packet.snr != None:
signal_str += "SNR "+str(packet.snr)+"dB"
signal_str += "]"
else:
signal_str = ""
if hasattr(packet, "transport_id") and packet.transport_id != None:
RNS.log("Valid announce for "+RNS.prettyhexrep(destination_hash)+" "+str(packet.hops)+" hops away, received via "+RNS.prettyhexrep(packet.transport_id)+" on "+str(packet.receiving_interface)+signal_str, RNS.LOG_EXTREME)
else:
RNS.log("Valid announce for "+RNS.prettyhexrep(destination_hash)+" "+str(packet.hops)+" hops away, received on "+str(packet.receiving_interface)+signal_str, RNS.LOG_EXTREME)
return True
if hasattr(packet, "transport_id") and packet.transport_id != None:
RNS.log("Valid announce for "+RNS.prettyhexrep(destination_hash)+" "+str(packet.hops)+" hops away, received via "+RNS.prettyhexrep(packet.transport_id)+" on "+str(packet.receiving_interface), RNS.LOG_EXTREME)
else:
RNS.log("Valid announce for "+RNS.prettyhexrep(destination_hash)+" "+str(packet.hops)+" hops away, received on "+str(packet.receiving_interface), RNS.LOG_EXTREME)
return True
RNS.log("Received invalid announce for "+RNS.prettyhexrep(destination_hash)+": Destination mismatch.", RNS.LOG_DEBUG)
return False
else:
RNS.log("Received invalid announce for "+RNS.prettyhexrep(destination_hash), RNS.LOG_DEBUG)
RNS.log("Received invalid announce for "+RNS.prettyhexrep(destination_hash)+": Invalid signature.", RNS.LOG_DEBUG)
del announced_identity
return False
@ -223,9 +296,14 @@ class Identity:
RNS.log("Error occurred while validating announce. The contained exception was: "+str(e), RNS.LOG_ERROR)
return False
@staticmethod
def persist_data():
if not RNS.Transport.owner.is_connected_to_shared_instance:
Identity.save_known_destinations()
@staticmethod
def exit_handler():
Identity.save_known_destinations()
Identity.persist_data()
@staticmethod
@ -297,30 +375,16 @@ class Identity:
def create_keys(self):
self.prv = X25519PrivateKey.generate()
self.prv_bytes = self.prv.private_bytes(
encoding=serialization.Encoding.Raw,
format=serialization.PrivateFormat.Raw,
encryption_algorithm=serialization.NoEncryption()
)
self.prv_bytes = self.prv.private_bytes()
self.sig_prv = Ed25519PrivateKey.generate()
self.sig_prv_bytes = self.sig_prv.private_bytes(
encoding=serialization.Encoding.Raw,
format=serialization.PrivateFormat.Raw,
encryption_algorithm=serialization.NoEncryption()
)
self.sig_prv_bytes = self.sig_prv.private_bytes()
self.pub = self.prv.public_key()
self.pub_bytes = self.pub.public_bytes(
encoding=serialization.Encoding.Raw,
format=serialization.PublicFormat.Raw
)
self.pub_bytes = self.pub.public_bytes()
self.sig_pub = self.sig_prv.public_key()
self.sig_pub_bytes = self.sig_pub.public_bytes(
encoding=serialization.Encoding.Raw,
format=serialization.PublicFormat.Raw
)
self.sig_pub_bytes = self.sig_pub.public_bytes()
self.update_hashes()
@ -352,16 +416,10 @@ class Identity:
self.sig_prv = Ed25519PrivateKey.from_private_bytes(self.sig_prv_bytes)
self.pub = self.prv.public_key()
self.pub_bytes = self.pub.public_bytes(
encoding=serialization.Encoding.Raw,
format=serialization.PublicFormat.Raw
)
self.pub_bytes = self.pub.public_bytes()
self.sig_pub = self.sig_prv.public_key()
self.sig_pub_bytes = self.sig_pub.public_bytes(
encoding=serialization.Encoding.Raw,
format=serialization.PublicFormat.Raw
)
self.sig_pub_bytes = self.sig_pub.public_bytes()
self.update_hashes()
@ -403,7 +461,7 @@ class Identity:
return False
except Exception as e:
RNS.log("Error while loading identity from "+str(path), RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e))
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
def get_salt(self):
return self.hash
@ -421,24 +479,19 @@ class Identity:
"""
if self.pub != None:
ephemeral_key = X25519PrivateKey.generate()
ephemeral_pub_bytes = ephemeral_key.public_key().public_bytes(
encoding=serialization.Encoding.Raw,
format=serialization.PublicFormat.Raw
)
ephemeral_pub_bytes = ephemeral_key.public_key().public_bytes()
shared_key = ephemeral_key.exchange(self.pub)
# TODO: Improve this re-allocation of HKDF
derived_key = HKDF(
algorithm=hashes.SHA256(),
derived_key = RNS.Cryptography.hkdf(
length=32,
derive_from=shared_key,
salt=self.get_salt(),
info=self.get_context(),
backend=cio_default_backend,
).derive(shared_key)
context=self.get_context(),
)
fernet = Fernet(base64.urlsafe_b64encode(derived_key))
ciphertext = base64.urlsafe_b64decode(fernet.encrypt(plaintext))
fernet = Fernet(derived_key)
ciphertext = fernet.encrypt(plaintext)
token = ephemeral_pub_bytes+ciphertext
return token
@ -463,18 +516,16 @@ class Identity:
shared_key = self.prv.exchange(peer_pub)
# TODO: Improve this re-allocation of HKDF
derived_key = HKDF(
algorithm=hashes.SHA256(),
derived_key = RNS.Cryptography.hkdf(
length=32,
derive_from=shared_key,
salt=self.get_salt(),
info=self.get_context(),
backend=cio_default_backend,
).derive(shared_key)
context=self.get_context(),
)
fernet = Fernet(base64.urlsafe_b64encode(derived_key))
fernet = Fernet(derived_key)
ciphertext = ciphertext_token[Identity.KEYSIZE//8//2:]
plaintext = fernet.decrypt(base64.urlsafe_b64encode(ciphertext))
plaintext = fernet.decrypt(ciphertext)
except Exception as e:
RNS.log("Decryption by "+RNS.prettyhexrep(self.hash)+" failed: "+str(e), RNS.LOG_DEBUG)

View File

@ -77,8 +77,9 @@ class AX25KISSInterface(Interface):
RNS.log("You can install one with the command: python3 -m pip install pyserial", RNS.LOG_CRITICAL)
RNS.panic()
self.rxb = 0
self.txb = 0
super().__init__()
self.HW_MTU = 564
self.pyserial = serial
self.serial = None
@ -151,7 +152,7 @@ class AX25KISSInterface(Interface):
# Allow time for interface to initialise before config
sleep(2.0)
thread = threading.Thread(target=self.readLoop)
thread.setDaemon(True)
thread.daemon = True
thread.start()
self.online = True
RNS.log("Serial port "+self.port+" is now open")
@ -304,7 +305,7 @@ class AX25KISSInterface(Interface):
in_frame = True
command = KISS.CMD_UNKNOWN
data_buffer = b""
elif (in_frame and len(data_buffer) < RNS.Reticulum.MTU+AX25.HEADER_SIZE):
elif (in_frame and len(data_buffer) < self.HW_MTU+AX25.HEADER_SIZE):
if (len(data_buffer) == 0 and command == KISS.CMD_UNKNOWN):
# We only support one HDLC port for now, so
# strip off the port nibble
@ -365,5 +366,8 @@ class AX25KISSInterface(Interface):
RNS.log("Reconnected serial port for "+str(self))
def should_ingress_limit(self):
return False
def __str__(self):
return "AX25KISSInterface["+self.name+"]"

View File

@ -0,0 +1,409 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from RNS.Interfaces.Interface import Interface
from time import sleep
import sys
import threading
import time
import RNS
class KISS():
FEND = 0xC0
FESC = 0xDB
TFEND = 0xDC
TFESC = 0xDD
CMD_UNKNOWN = 0xFE
CMD_DATA = 0x00
CMD_TXDELAY = 0x01
CMD_P = 0x02
CMD_SLOTTIME = 0x03
CMD_TXTAIL = 0x04
CMD_FULLDUPLEX = 0x05
CMD_SETHARDWARE = 0x06
CMD_READY = 0x0F
CMD_RETURN = 0xFF
@staticmethod
def escape(data):
data = data.replace(bytes([0xdb]), bytes([0xdb, 0xdd]))
data = data.replace(bytes([0xc0]), bytes([0xdb, 0xdc]))
return data
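# Editor's note (not part of the original file): outgoing KISS data is
# typically framed as
#
#   frame = bytes([KISS.FEND]) + bytes([KISS.CMD_DATA]) + KISS.escape(data) + bytes([KISS.FEND])
#
# escape() substitutes FESC before FEND, so the TFEND/TFESC bytes it inserts
# are never themselves re-escaped.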
class KISSInterface(Interface):
MAX_CHUNK = 32768
BITRATE_GUESS = 1200
owner = None
port = None
speed = None
databits = None
parity = None
stopbits = None
serial = None
def __init__(self, owner, name, port, speed, databits, parity, stopbits, preamble, txtail, persistence, slottime, flow_control, beacon_interval, beacon_data):
import importlib
if RNS.vendor.platformutils.is_android():
self.on_android = True
if importlib.util.find_spec('usbserial4a') != None:
if importlib.util.find_spec('jnius') == None:
RNS.log("Could not load jnius API wrapper for Android, KISS interface cannot be created.", RNS.LOG_CRITICAL)
RNS.log("This probably means you are trying to use an USB-based interface from within Termux or similar.", RNS.LOG_CRITICAL)
RNS.log("This is currently not possible, due to this environment limiting access to the native Android APIs.", RNS.LOG_CRITICAL)
RNS.panic()
from usbserial4a import serial4a as serial
self.parity = "N"
else:
RNS.log("Could not load USB serial module for Android, KISS interface cannot be created.", RNS.LOG_CRITICAL)
RNS.log("You can install this module by issuing: pip install usbserial4a", RNS.LOG_CRITICAL)
RNS.panic()
else:
raise SystemError("Android-specific interface was used on non-Android OS")
super().__init__()
self.HW_MTU = 564
if beacon_data == None:
beacon_data = ""
self.pyserial = serial
self.serial = None
self.owner = owner
self.name = name
self.port = port
self.speed = speed
self.databits = databits
self.parity = "N"
self.stopbits = stopbits
self.timeout = 100
self.online = False
self.beacon_i = beacon_interval
self.beacon_d = beacon_data.encode("utf-8")
self.first_tx = None
self.bitrate = KISSInterface.BITRATE_GUESS
self.packet_queue = []
self.flow_control = flow_control
self.interface_ready = False
self.flow_control_timeout = 5
self.flow_control_locked = time.time()
self.preamble = preamble if preamble != None else 350
self.txtail = txtail if txtail != None else 20
self.persistence = persistence if persistence != None else 64
self.slottime = slottime if slottime != None else 20
if parity.lower() == "e" or parity.lower() == "even":
self.parity = "E"
if parity.lower() == "o" or parity.lower() == "odd":
self.parity = "O"
try:
self.open_port()
except Exception as e:
RNS.log("Could not open serial port "+self.port, RNS.LOG_ERROR)
raise e
if self.serial.is_open:
self.configure_device()
else:
raise IOError("Could not open serial port")
def open_port(self):
RNS.log("Opening serial port "+self.port+"...")
# Get device parameters
from usb4a import usb
device = usb.get_usb_device(self.port)
if device:
vid = device.getVendorId()
pid = device.getProductId()
# Driver overrides for specific chips
proxy = self.pyserial.get_serial_port
if vid == 0x1A86 and pid == 0x55D4:
# Force CDC driver for Qinheng CH34x
RNS.log(str(self)+" using CDC driver for "+RNS.hexrep(vid)+":"+RNS.hexrep(pid), RNS.LOG_DEBUG)
from usbserial4a.cdcacmserial4a import CdcAcmSerial
proxy = CdcAcmSerial
self.serial = proxy(
self.port,
baudrate = self.speed,
bytesize = self.databits,
parity = self.parity,
stopbits = self.stopbits,
xonxoff = False,
rtscts = False,
timeout = None,
inter_byte_timeout = None,
# write_timeout = wtimeout,
dsrdtr = False,
)
if vid == 0x0403:
# Hardware parameters for FTDI devices @ 115200 baud
self.serial.DEFAULT_READ_BUFFER_SIZE = 16 * 1024
self.serial.USB_READ_TIMEOUT_MILLIS = 100
self.serial.timeout = 0.1
elif vid == 0x10C4:
# Hardware parameters for SiLabs CP210x @ 115200 baud
self.serial.DEFAULT_READ_BUFFER_SIZE = 64
self.serial.USB_READ_TIMEOUT_MILLIS = 12
self.serial.timeout = 0.012
elif vid == 0x1A86 and pid == 0x55D4:
# Hardware parameters for Qinheng CH34x @ 115200 baud
self.serial.DEFAULT_READ_BUFFER_SIZE = 64
self.serial.USB_READ_TIMEOUT_MILLIS = 12
self.serial.timeout = 0.1
else:
# Default values
self.serial.DEFAULT_READ_BUFFER_SIZE = 1 * 1024
self.serial.USB_READ_TIMEOUT_MILLIS = 100
self.serial.timeout = 0.1
RNS.log(str(self)+" USB read buffer size set to "+RNS.prettysize(self.serial.DEFAULT_READ_BUFFER_SIZE), RNS.LOG_DEBUG)
RNS.log(str(self)+" USB read timeout set to "+str(self.serial.USB_READ_TIMEOUT_MILLIS)+"ms", RNS.LOG_DEBUG)
RNS.log(str(self)+" USB write timeout set to "+str(self.serial.USB_WRITE_TIMEOUT_MILLIS)+"ms", RNS.LOG_DEBUG)
def configure_device(self):
# Allow time for interface to initialise before config
sleep(2.0)
thread = threading.Thread(target=self.readLoop)
thread.daemon = True
thread.start()
self.online = True
RNS.log("Serial port "+self.port+" is now open")
RNS.log("Configuring KISS interface parameters...")
self.setPreamble(self.preamble)
self.setTxTail(self.txtail)
self.setPersistence(self.persistence)
self.setSlotTime(self.slottime)
self.setFlowControl(self.flow_control)
self.interface_ready = True
RNS.log("KISS interface configured")
def setPreamble(self, preamble):
preamble_ms = preamble
preamble = int(preamble_ms / 10)
if preamble < 0:
preamble = 0
if preamble > 255:
preamble = 255
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_TXDELAY])+bytes([preamble])+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("Could not configure KISS interface preamble to "+str(preamble_ms)+" (command value "+str(preamble)+")")
def setTxTail(self, txtail):
txtail_ms = txtail
txtail = int(txtail_ms / 10)
if txtail < 0:
txtail = 0
if txtail > 255:
txtail = 255
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_TXTAIL])+bytes([txtail])+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("Could not configure KISS interface TX tail to "+str(txtail_ms)+" (command value "+str(txtail)+")")
def setPersistence(self, persistence):
if persistence < 0:
persistence = 0
if persistence > 255:
persistence = 255
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_P])+bytes([persistence])+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("Could not configure KISS interface persistence to "+str(persistence))
def setSlotTime(self, slottime):
slottime_ms = slottime
slottime = int(slottime_ms / 10)
if slottime < 0:
slottime = 0
if slottime > 255:
slottime = 255
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_SLOTTIME])+bytes([slottime])+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("Could not configure KISS interface slot time to "+str(slottime_ms)+" (command value "+str(slottime)+")")
def setFlowControl(self, flow_control):
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_READY])+bytes([0x01])+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
if (flow_control):
raise IOError("Could not enable KISS interface flow control")
else:
raise IOError("Could not enable KISS interface flow control")
def processIncoming(self, data):
self.rxb += len(data)
def af():
self.owner.inbound(data, self)
threading.Thread(target=af, daemon=True).start()
def processOutgoing(self,data):
datalen = len(data)
if self.online:
if self.interface_ready:
if self.flow_control:
self.interface_ready = False
self.flow_control_locked = time.time()
data = data.replace(bytes([0xdb]), bytes([0xdb])+bytes([0xdd]))
data = data.replace(bytes([0xc0]), bytes([0xdb])+bytes([0xdc]))
frame = bytes([KISS.FEND])+bytes([0x00])+data+bytes([KISS.FEND])
written = self.serial.write(frame)
self.txb += datalen
if data == self.beacon_d:
self.first_tx = None
else:
if self.first_tx == None:
self.first_tx = time.time()
if written != len(frame):
raise IOError("Serial interface only wrote "+str(written)+" bytes of "+str(len(data)))
else:
self.queue(data)
def queue(self, data):
self.packet_queue.append(data)
def process_queue(self):
if len(self.packet_queue) > 0:
data = self.packet_queue.pop(0)
self.interface_ready = True
self.processOutgoing(data)
elif len(self.packet_queue) == 0:
self.interface_ready = True
def readLoop(self):
try:
in_frame = False
escape = False
command = KISS.CMD_UNKNOWN
data_buffer = b""
last_read_ms = int(time.time()*1000)
while self.serial.is_open:
serial_bytes = self.serial.read()
got = len(serial_bytes)
for byte in serial_bytes:
last_read_ms = int(time.time()*1000)
if (in_frame and byte == KISS.FEND and command == KISS.CMD_DATA):
in_frame = False
self.processIncoming(data_buffer)
elif (byte == KISS.FEND):
in_frame = True
command = KISS.CMD_UNKNOWN
data_buffer = b""
elif (in_frame and len(data_buffer) < self.HW_MTU):
if (len(data_buffer) == 0 and command == KISS.CMD_UNKNOWN):
# We only support one HDLC port for now, so
# strip off the port nibble
byte = byte & 0x0F
command = byte
elif (command == KISS.CMD_DATA):
if (byte == KISS.FESC):
escape = True
else:
if (escape):
if (byte == KISS.TFEND):
byte = KISS.FEND
if (byte == KISS.TFESC):
byte = KISS.FESC
escape = False
data_buffer = data_buffer+bytes([byte])
elif (command == KISS.CMD_READY):
self.process_queue()
if got == 0:
time_since_last = int(time.time()*1000) - last_read_ms
if len(data_buffer) > 0 and time_since_last > self.timeout:
data_buffer = b""
in_frame = False
command = KISS.CMD_UNKNOWN
escape = False
sleep(0.05)
if self.flow_control:
if not self.interface_ready:
if time.time() > self.flow_control_locked + self.flow_control_timeout:
RNS.log("Interface "+str(self)+" is unlocking flow control due to time-out. This should not happen. Your hardware might have missed a flow-control READY command, or maybe it does not support flow-control.", RNS.LOG_WARNING)
self.process_queue()
if self.beacon_i != None and self.beacon_d != None:
if self.first_tx != None:
if time.time() > self.first_tx + self.beacon_i:
RNS.log("Interface "+str(self)+" is transmitting beacon data: "+str(self.beacon_d.decode("utf-8")), RNS.LOG_DEBUG)
self.first_tx = None
self.processOutgoing(self.beacon_d)
except Exception as e:
self.online = False
RNS.log("A serial port error occurred, the contained exception was: "+str(e), RNS.LOG_ERROR)
RNS.log("The interface "+str(self)+" experienced an unrecoverable error and is now offline.", RNS.LOG_ERROR)
if RNS.Reticulum.panic_on_interface_error:
RNS.panic()
RNS.log("Reticulum will attempt to reconnect the interface periodically.", RNS.LOG_ERROR)
self.online = False
self.serial.close()
self.reconnect_port()
def reconnect_port(self):
while not self.online:
try:
time.sleep(5)
RNS.log("Attempting to reconnect serial port "+str(self.port)+" for "+str(self)+"...", RNS.LOG_VERBOSE)
self.open_port()
if self.serial.is_open:
self.configure_device()
except Exception as e:
RNS.log("Error while reconnecting port, the contained exception was: "+str(e), RNS.LOG_ERROR)
RNS.log("Reconnected serial port for "+str(self))
def should_ingress_limit(self):
return False
def __str__(self):
return "KISSInterface["+self.name+"]"

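For reference, the data framing implemented by KISS.escape() and processOutgoing() in the file above can be reduced to a few lines; this is only an illustrative sketch, with the helper name and sample payload made up for the example:

FEND, FESC, TFEND, TFESC, CMD_DATA = 0xC0, 0xDB, 0xDC, 0xDD, 0x00

def kiss_frame(payload: bytes) -> bytes:
    # Escape FESC bytes first, then FEND bytes, exactly as KISS.escape() does,
    # then wrap the escaped payload in FEND delimiters with the data command.
    escaped = payload.replace(bytes([FESC]), bytes([FESC, TFESC]))
    escaped = escaped.replace(bytes([FEND]), bytes([FESC, TFEND]))
    return bytes([FEND, CMD_DATA]) + escaped + bytes([FEND])

print(kiss_frame(b"\xc0\x01\xdb").hex())   # -> c000dbdc01dbddc0 (FEND, CMD_DATA, escaped payload, FEND)
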
File diff suppressed because it is too large

View File

@ -0,0 +1,260 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from RNS.Interfaces.Interface import Interface
from time import sleep
import sys
import threading
import time
import RNS
class HDLC():
# The Serial Interface packetizes data using
# simplified HDLC framing, similar to PPP
FLAG = 0x7E
ESC = 0x7D
ESC_MASK = 0x20
@staticmethod
def escape(data):
data = data.replace(bytes([HDLC.ESC]), bytes([HDLC.ESC, HDLC.ESC^HDLC.ESC_MASK]))
data = data.replace(bytes([HDLC.FLAG]), bytes([HDLC.ESC, HDLC.FLAG^HDLC.ESC_MASK]))
return data
class SerialInterface(Interface):
MAX_CHUNK = 32768
owner = None
port = None
speed = None
databits = None
parity = None
stopbits = None
serial = None
def __init__(self, owner, name, port, speed, databits, parity, stopbits):
import importlib
if RNS.vendor.platformutils.is_android():
self.on_android = True
if importlib.util.find_spec('usbserial4a') != None:
if importlib.util.find_spec('jnius') == None:
RNS.log("Could not load jnius API wrapper for Android, Serial interface cannot be created.", RNS.LOG_CRITICAL)
RNS.log("This probably means you are trying to use an USB-based interface from within Termux or similar.", RNS.LOG_CRITICAL)
RNS.log("This is currently not possible, due to this environment limiting access to the native Android APIs.", RNS.LOG_CRITICAL)
RNS.panic()
from usbserial4a import serial4a as serial
self.parity = "N"
else:
RNS.log("Could not load USB serial module for Android, Serial interface cannot be created.", RNS.LOG_CRITICAL)
RNS.log("You can install this module by issuing: pip install usbserial4a", RNS.LOG_CRITICAL)
RNS.panic()
else:
raise SystemError("Android-specific interface was used on non-Android OS")
super().__init__()
self.HW_MTU = 564
self.pyserial = serial
self.serial = None
self.owner = owner
self.name = name
self.port = port
self.speed = speed
self.databits = databits
self.parity = "N"
self.stopbits = stopbits
self.timeout = 100
self.online = False
self.bitrate = self.speed
if parity.lower() == "e" or parity.lower() == "even":
self.parity = "E"
if parity.lower() == "o" or parity.lower() == "odd":
self.parity = "O"
try:
self.open_port()
except Exception as e:
RNS.log("Could not open serial port for interface "+str(self), RNS.LOG_ERROR)
raise e
if self.serial.is_open:
self.configure_device()
else:
raise IOError("Could not open serial port")
def open_port(self):
RNS.log("Opening serial port "+self.port+"...")
# Get device parameters
from usb4a import usb
device = usb.get_usb_device(self.port)
if device:
vid = device.getVendorId()
pid = device.getProductId()
# Driver overrides for specific chips
proxy = self.pyserial.get_serial_port
if vid == 0x1A86 and pid == 0x55D4:
# Force CDC driver for Qinheng CH34x
RNS.log(str(self)+" using CDC driver for "+RNS.hexrep(vid)+":"+RNS.hexrep(pid), RNS.LOG_DEBUG)
from usbserial4a.cdcacmserial4a import CdcAcmSerial
proxy = CdcAcmSerial
self.serial = proxy(
self.port,
baudrate = self.speed,
bytesize = self.databits,
parity = self.parity,
stopbits = self.stopbits,
xonxoff = False,
rtscts = False,
timeout = None,
inter_byte_timeout = None,
# write_timeout = wtimeout,
dsrdtr = False,
)
if vid == 0x0403:
# Hardware parameters for FTDI devices @ 115200 baud
self.serial.DEFAULT_READ_BUFFER_SIZE = 16 * 1024
self.serial.USB_READ_TIMEOUT_MILLIS = 100
self.serial.timeout = 0.1
elif vid == 0x10C4:
# Hardware parameters for SiLabs CP210x @ 115200 baud
self.serial.DEFAULT_READ_BUFFER_SIZE = 64
self.serial.USB_READ_TIMEOUT_MILLIS = 12
self.serial.timeout = 0.012
elif vid == 0x1A86 and pid == 0x55D4:
# Hardware parameters for Qinheng CH34x @ 115200 baud
self.serial.DEFAULT_READ_BUFFER_SIZE = 64
self.serial.USB_READ_TIMEOUT_MILLIS = 12
self.serial.timeout = 0.1
else:
# Default values
self.serial.DEFAULT_READ_BUFFER_SIZE = 1 * 1024
self.serial.USB_READ_TIMEOUT_MILLIS = 100
self.serial.timeout = 0.1
RNS.log(str(self)+" USB read buffer size set to "+RNS.prettysize(self.serial.DEFAULT_READ_BUFFER_SIZE), RNS.LOG_DEBUG)
RNS.log(str(self)+" USB read timeout set to "+str(self.serial.USB_READ_TIMEOUT_MILLIS)+"ms", RNS.LOG_DEBUG)
RNS.log(str(self)+" USB write timeout set to "+str(self.serial.USB_WRITE_TIMEOUT_MILLIS)+"ms", RNS.LOG_DEBUG)
def configure_device(self):
sleep(0.5)
thread = threading.Thread(target=self.readLoop)
thread.daemon = True
thread.start()
self.online = True
RNS.log("Serial port "+self.port+" is now open", RNS.LOG_VERBOSE)
def processIncoming(self, data):
self.rxb += len(data)
def af():
self.owner.inbound(data, self)
threading.Thread(target=af, daemon=True).start()
def processOutgoing(self,data):
if self.online:
data = bytes([HDLC.FLAG])+HDLC.escape(data)+bytes([HDLC.FLAG])
written = self.serial.write(data)
self.txb += len(data)
if written != len(data):
raise IOError("Serial interface only wrote "+str(written)+" bytes of "+str(len(data)))
def readLoop(self):
try:
in_frame = False
escape = False
data_buffer = b""
last_read_ms = int(time.time()*1000)
while self.serial.is_open:
serial_bytes = self.serial.read()
got = len(serial_bytes)
for byte in serial_bytes:
last_read_ms = int(time.time()*1000)
if (in_frame and byte == HDLC.FLAG):
in_frame = False
self.processIncoming(data_buffer)
elif (byte == HDLC.FLAG):
in_frame = True
data_buffer = b""
elif (in_frame and len(data_buffer) < self.HW_MTU):
if (byte == HDLC.ESC):
escape = True
else:
if (escape):
if (byte == HDLC.FLAG ^ HDLC.ESC_MASK):
byte = HDLC.FLAG
if (byte == HDLC.ESC ^ HDLC.ESC_MASK):
byte = HDLC.ESC
escape = False
data_buffer = data_buffer+bytes([byte])
if got == 0:
time_since_last = int(time.time()*1000) - last_read_ms
if len(data_buffer) > 0 and time_since_last > self.timeout:
data_buffer = b""
in_frame = False
escape = False
# sleep(0.08)
except Exception as e:
self.online = False
RNS.log("A serial port error occurred, the contained exception was: "+str(e), RNS.LOG_ERROR)
RNS.log("The interface "+str(self)+" experienced an unrecoverable error and is now offline.", RNS.LOG_ERROR)
if RNS.Reticulum.panic_on_interface_error:
RNS.panic()
RNS.log("Reticulum will attempt to reconnect the interface periodically.", RNS.LOG_ERROR)
self.online = False
self.serial.close()
self.reconnect_port()
def reconnect_port(self):
while not self.online:
try:
time.sleep(5)
RNS.log("Attempting to reconnect serial port "+str(self.port)+" for "+str(self)+"...", RNS.LOG_VERBOSE)
self.open_port()
if self.serial.is_open:
self.configure_device()
except Exception as e:
RNS.log("Error while reconnecting port, the contained exception was: "+str(e), RNS.LOG_ERROR)
RNS.log("Reconnected serial port for "+str(self))
def should_ingress_limit(self):
return False
def __str__(self):
return "SerialInterface["+self.name+"]"

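The SerialInterface above uses the simplified HDLC framing defined by the HDLC class; a self-contained sketch of the escaping and framing follows, with helper names and the sample payload being illustrative only:

FLAG, ESC, ESC_MASK = 0x7E, 0x7D, 0x20

def hdlc_escape(data: bytes) -> bytes:
    # Same order as HDLC.escape() above: escape ESC bytes first, then FLAG bytes.
    data = data.replace(bytes([ESC]), bytes([ESC, ESC ^ ESC_MASK]))
    data = data.replace(bytes([FLAG]), bytes([ESC, FLAG ^ ESC_MASK]))
    return data

def hdlc_frame(payload: bytes) -> bytes:
    # processOutgoing() wraps the escaped payload in FLAG delimiters.
    return bytes([FLAG]) + hdlc_escape(payload) + bytes([FLAG])

print(hdlc_frame(b"\x7e\x10\x7d").hex())   # -> 7e7d5e107d5d7e (FLAG, escaped payload, FLAG)
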
View File

@ -0,0 +1,27 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import os
import glob
modules = glob.glob(os.path.dirname(__file__)+"/*.py")
__all__ = [ os.path.basename(f)[:-3] for f in modules if not f.endswith('__init__.py')]

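The package __init__ above exposes every module in the directory except __init__.py itself; with hypothetical paths standing in for the glob() result, the comprehension behaves like this:

import os

modules = ["/tmp/Interfaces/KISSInterface.py",
           "/tmp/Interfaces/SerialInterface.py",
           "/tmp/Interfaces/__init__.py"]
print([os.path.basename(f)[:-3] for f in modules if not f.endswith('__init__.py')])
# -> ['KISSInterface', 'SerialInterface']
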
View File

@ -21,8 +21,10 @@
# SOFTWARE.
from .Interface import Interface
from collections import deque
import socketserver
import threading
import re
import socket
import struct
import time
@ -41,25 +43,56 @@ class AutoInterface(Interface):
SCOPE_ORGANISATION = "8"
SCOPE_GLOBAL = "e"
MULTICAST_PERMANENT_ADDRESS_TYPE = "0"
MULTICAST_TEMPORARY_ADDRESS_TYPE = "1"
PEERING_TIMEOUT = 7.5
ALL_IGNORE_IFS = ["lo0"]
DARWIN_IGNORE_IFS = ["awdl0", "llw0", "lo0", "en5"]
ANDROID_IGNORE_IFS = ["dummy0", "lo", "tun0"]
BITRATE_GUESS = 10*1000*1000
def __init__(self, owner, name, group_id=None, discovery_scope=None, discovery_port=None, data_port=None, allowed_interfaces=None, ignored_interfaces=None, configured_bitrate=None):
import importlib
if importlib.util.find_spec('netifaces') != None:
import netifaces
else:
RNS.log("Using AutoInterface requires the netifaces module.", RNS.LOG_CRITICAL)
RNS.log("You can install it with the command: python3 -m pip install netifaces", RNS.LOG_CRITICAL)
RNS.panic()
MULTI_IF_DEQUE_LEN = 48
MULTI_IF_DEQUE_TTL = 0.75
def handler_factory(self, callback):
def create_handler(*args, **keys):
return AutoInterfaceHandler(callback, *args, **keys)
return create_handler
def descope_linklocal(self, link_local_addr):
# Drop scope specifier expressed as %ifname (macOS)
link_local_addr = link_local_addr.split("%")[0]
# Drop embedded scope specifier (NetBSD, OpenBSD)
link_local_addr = re.sub(r"fe80:[0-9a-f]*::","fe80::", link_local_addr)
return link_local_addr
def list_interfaces(self):
ifs = self.netinfo.interfaces()
return ifs
def list_addresses(self, ifname):
ifas = self.netinfo.ifaddresses(ifname)
return ifas
def interface_name_to_index(self, ifname):
# socket.if_nametoindex doesn't work with uuid interface names on windows, it wants the ethernet_0 style
# we will just get the index from netinfo instead as it seems to work
if RNS.vendor.platformutils.is_windows():
return self.netinfo.interface_names_to_indexes()[ifname]
return socket.if_nametoindex(ifname)
def __init__(self, owner, name, group_id=None, discovery_scope=None, discovery_port=None, multicast_address_type=None, data_port=None, allowed_interfaces=None, ignored_interfaces=None, configured_bitrate=None):
from RNS.vendor.ifaddr import niwrapper
super().__init__()
self.netinfo = niwrapper
self.HW_MTU = 1064
self.netifaces = netifaces
self.rxb = 0
self.txb = 0
self.IN = True
self.OUT = False
self.name = name
@ -67,16 +100,26 @@ class AutoInterface(Interface):
self.peers = {}
self.link_local_addresses = []
self.adopted_interfaces = {}
self.interface_servers = {}
self.multicast_echoes = {}
self.timed_out_interfaces = {}
self.mif_deque = deque(maxlen=AutoInterface.MULTI_IF_DEQUE_LEN)
self.mif_deque_times = deque(maxlen=AutoInterface.MULTI_IF_DEQUE_LEN)
self.carrier_changed = False
self.outbound_udp_socket = None
self.announce_rate_target = None
self.announce_interval = AutoInterface.PEERING_TIMEOUT/6.0
self.peer_job_interval = AutoInterface.PEERING_TIMEOUT*1.1
self.peering_timeout = AutoInterface.PEERING_TIMEOUT
self.multicast_echo_timeout = AutoInterface.PEERING_TIMEOUT/2
# Increase peering timeout on Android, due to potential
# low-power modes implemented on many chipsets.
if RNS.vendor.platformutils.is_android():
self.peering_timeout *= 3
if allowed_interfaces == None:
self.allowed_interfaces = []
else:
@ -97,6 +140,15 @@ class AutoInterface(Interface):
else:
self.discovery_port = discovery_port
if multicast_address_type == None:
self.multicast_address_type = AutoInterface.MULTICAST_TEMPORARY_ADDRESS_TYPE
elif str(multicast_address_type).lower() == "temporary":
self.multicast_address_type = AutoInterface.MULTICAST_TEMPORARY_ADDRESS_TYPE
elif str(multicast_address_type).lower() == "permanent":
self.multicast_address_type = AutoInterface.MULTICAST_PERMANENT_ADDRESS_TYPE
else:
self.multicast_address_type = AutoInterface.MULTICAST_TEMPORARY_ADDRESS_TYPE
if data_port == None:
self.data_port = AutoInterface.DEFAULT_DATA_PORT
else:
@ -125,105 +177,129 @@ class AutoInterface(Interface):
gt += ":"+"{:02x}".format(g[9]+(g[8]<<8))
gt += ":"+"{:02x}".format(g[11]+(g[10]<<8))
gt += ":"+"{:02x}".format(g[13]+(g[12]<<8))
self.mcast_discovery_address = "ff1"+self.discovery_scope+":"+gt
self.mcast_discovery_address = "ff"+self.multicast_address_type+self.discovery_scope+":"+gt
suitable_interfaces = 0
for ifname in self.netifaces.interfaces():
if RNS.vendor.platformutils.is_darwin() and ifname in AutoInterface.DARWIN_IGNORE_IFS and not ifname in self.allowed_interfaces:
RNS.log(str(self)+" skipping Darwin AWDL or tethering interface "+str(ifname), RNS.LOG_EXTREME)
elif RNS.vendor.platformutils.is_darwin() and ifname == "lo0":
RNS.log(str(self)+" skipping Darwin loopback interface "+str(ifname), RNS.LOG_EXTREME)
elif RNS.vendor.platformutils.is_android() and ifname in AutoInterface.ANDROID_IGNORE_IFS and not ifname in self.allowed_interfaces:
RNS.log(str(self)+" skipping Android system interface "+str(ifname), RNS.LOG_EXTREME)
elif ifname in self.ignored_interfaces:
RNS.log(str(self)+" ignoring disallowed interface "+str(ifname), RNS.LOG_EXTREME)
else:
if len(self.allowed_interfaces) > 0 and not ifname in self.allowed_interfaces:
RNS.log(str(self)+" ignoring interface "+str(ifname)+" since it was not allowed", RNS.LOG_EXTREME)
for ifname in self.list_interfaces():
try:
if RNS.vendor.platformutils.is_darwin() and ifname in AutoInterface.DARWIN_IGNORE_IFS and not ifname in self.allowed_interfaces:
RNS.log(str(self)+" skipping Darwin AWDL or tethering interface "+str(ifname), RNS.LOG_EXTREME)
elif RNS.vendor.platformutils.is_darwin() and ifname == "lo0":
RNS.log(str(self)+" skipping Darwin loopback interface "+str(ifname), RNS.LOG_EXTREME)
elif RNS.vendor.platformutils.is_android() and ifname in AutoInterface.ANDROID_IGNORE_IFS and not ifname in self.allowed_interfaces:
RNS.log(str(self)+" skipping Android system interface "+str(ifname), RNS.LOG_EXTREME)
elif ifname in self.ignored_interfaces:
RNS.log(str(self)+" ignoring disallowed interface "+str(ifname), RNS.LOG_EXTREME)
elif ifname in AutoInterface.ALL_IGNORE_IFS:
RNS.log(str(self)+" skipping interface "+str(ifname), RNS.LOG_EXTREME)
else:
addresses = self.netifaces.ifaddresses(ifname)
if self.netifaces.AF_INET6 in addresses:
link_local_addr = None
for address in addresses[self.netifaces.AF_INET6]:
if "addr" in address:
if address["addr"].startswith("fe80:"):
link_local_addr = address["addr"]
self.link_local_addresses.append(link_local_addr.split("%")[0])
self.adopted_interfaces[ifname] = link_local_addr.split("%")[0]
self.multicast_echoes[ifname] = time.time()
RNS.log(str(self)+" Selecting link-local address "+str(link_local_addr)+" for interface "+str(ifname), RNS.LOG_EXTREME)
if len(self.allowed_interfaces) > 0 and not ifname in self.allowed_interfaces:
RNS.log(str(self)+" ignoring interface "+str(ifname)+" since it was not allowed", RNS.LOG_EXTREME)
else:
addresses = self.list_addresses(ifname)
if self.netinfo.AF_INET6 in addresses:
link_local_addr = None
for address in addresses[self.netinfo.AF_INET6]:
if "addr" in address:
if address["addr"].startswith("fe80:"):
link_local_addr = self.descope_linklocal(address["addr"])
self.link_local_addresses.append(link_local_addr)
self.adopted_interfaces[ifname] = link_local_addr
self.multicast_echoes[ifname] = time.time()
nice_name = self.netinfo.interface_name_to_nice_name(ifname)
if nice_name != None and nice_name != ifname:
RNS.log(f"{self} Selecting link-local address {link_local_addr} for interface {nice_name} / {ifname}", RNS.LOG_EXTREME)
else:
RNS.log(f"{self} Selecting link-local address {link_local_addr} for interface {ifname}", RNS.LOG_EXTREME)
if link_local_addr == None:
RNS.log(str(self)+" No link-local IPv6 address configured for "+str(ifname)+", skipping interface", RNS.LOG_EXTREME)
else:
mcast_addr = self.mcast_discovery_address
RNS.log(str(self)+" Creating multicast discovery listener on "+str(ifname)+" with address "+str(mcast_addr), RNS.LOG_EXTREME)
if link_local_addr == None:
RNS.log(str(self)+" No link-local IPv6 address configured for "+str(ifname)+", skipping interface", RNS.LOG_EXTREME)
else:
mcast_addr = self.mcast_discovery_address
RNS.log(str(self)+" Creating multicast discovery listener on "+str(ifname)+" with address "+str(mcast_addr), RNS.LOG_EXTREME)
# Struct with interface index
if_struct = struct.pack("I", socket.if_nametoindex(ifname))
# Struct with interface index
if_struct = struct.pack("I", self.interface_name_to_index(ifname))
# Set up multicast socket
discovery_socket = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
discovery_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
discovery_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
discovery_socket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_IF, if_struct)
# Set up multicast socket
discovery_socket = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
discovery_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
if hasattr(socket, "SO_REUSEPORT"):
discovery_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
discovery_socket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_IF, if_struct)
# Join multicast group
mcast_group = socket.inet_pton(socket.AF_INET6, mcast_addr) + if_struct
discovery_socket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mcast_group)
# Join multicast group
mcast_group = socket.inet_pton(socket.AF_INET6, mcast_addr) + if_struct
discovery_socket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mcast_group)
# Bind socket
addr_info = socket.getaddrinfo(mcast_addr+"%"+ifname, self.discovery_port, socket.AF_INET6, socket.SOCK_DGRAM)
discovery_socket.bind(addr_info[0][4])
# Bind socket
if RNS.vendor.platformutils.is_windows():
# Set up thread for discovery packets
def discovery_loop():
self.discovery_handler(discovery_socket, ifname)
# Windows throws "[WinError 10049] The requested address is not valid in its context"
# when trying to use the multicast address as host, or when providing the interface index.
# Passing an empty host appears to work, but probably not exactly how we want it to...
discovery_socket.bind(('', self.discovery_port))
thread = threading.Thread(target=discovery_loop)
thread.setDaemon(True)
thread.start()
else:
suitable_interfaces += 1
if self.discovery_scope == AutoInterface.SCOPE_LINK:
addr_info = socket.getaddrinfo(mcast_addr+"%"+ifname, self.discovery_port, socket.AF_INET6, socket.SOCK_DGRAM)
else:
addr_info = socket.getaddrinfo(mcast_addr, self.discovery_port, socket.AF_INET6, socket.SOCK_DGRAM)
discovery_socket.bind(addr_info[0][4])
# Set up thread for discovery packets
def discovery_loop():
self.discovery_handler(discovery_socket, ifname)
thread = threading.Thread(target=discovery_loop)
thread.daemon = True
thread.start()
suitable_interfaces += 1
except Exception as e:
nice_name = self.netinfo.interface_name_to_nice_name(ifname)
if nice_name != None and nice_name != ifname:
RNS.log(f"Could not configure the system interface {nice_name} / {ifname} for use with {self}, skipping it. The contained exception was: {e}", RNS.LOG_ERROR)
else:
RNS.log(f"Could not configure the system interface {ifname} for use with {self}, skipping it. The contained exception was: {e}", RNS.LOG_ERROR)
if suitable_interfaces == 0:
RNS.log(str(self)+" could not autoconfigure. This interface currently provides no connectivity.", RNS.LOG_WARNING)
else:
self.receives = True
if configured_bitrate != None:
self.bitrate = configured_bitrate
else:
self.bitrate = AutoInterface.BITRATE_GUESS
peering_wait = self.announce_interval*1.2
RNS.log(str(self)+" discovering peers for "+str(round(peering_wait, 2))+" seconds...", RNS.LOG_VERBOSE)
def handlerFactory(callback):
def createHandler(*args, **keys):
return AutoInterfaceHandler(callback, *args, **keys)
return createHandler
self.owner = owner
socketserver.UDPServer.address_family = socket.AF_INET6
for ifname in self.adopted_interfaces:
local_addr = self.adopted_interfaces[ifname]+"%"+ifname
local_addr = self.adopted_interfaces[ifname]+"%"+str(self.interface_name_to_index(ifname))
addr_info = socket.getaddrinfo(local_addr, self.data_port, socket.AF_INET6, socket.SOCK_DGRAM)
address = addr_info[0][4]
self.server = socketserver.UDPServer(address, handlerFactory(self.processIncoming))
udp_server = socketserver.UDPServer(address, self.handler_factory(self.processIncoming))
self.interface_servers[ifname] = udp_server
thread = threading.Thread(target=self.server.serve_forever)
thread.setDaemon(True)
thread = threading.Thread(target=udp_server.serve_forever)
thread.daemon = True
thread.start()
job_thread = threading.Thread(target=self.peer_jobs)
job_thread.setDaemon(True)
job_thread.daemon = True
job_thread.start()
time.sleep(peering_wait)
if configured_bitrate != None:
self.bitrate = configured_bitrate
else:
self.bitrate = AutoInterface.BITRATE_GUESS
self.online = True
@ -232,7 +308,7 @@ class AutoInterface(Interface):
self.announce_handler(ifname)
thread = threading.Thread(target=announce_loop)
thread.setDaemon(True)
thread.daemon = True
thread.start()
while True:
@ -262,16 +338,62 @@ class AutoInterface(Interface):
RNS.log(str(self)+" removed peer "+str(peer_addr)+" on "+str(removed_peer[0]), RNS.LOG_DEBUG)
for ifname in self.adopted_interfaces:
# Check that the link-local address has not changed
try:
addresses = self.list_addresses(ifname)
if self.netinfo.AF_INET6 in addresses:
link_local_addr = None
for address in addresses[self.netinfo.AF_INET6]:
if "addr" in address:
if address["addr"].startswith("fe80:"):
link_local_addr = self.descope_linklocal(address["addr"])
if link_local_addr != self.adopted_interfaces[ifname]:
old_link_local_address = self.adopted_interfaces[ifname]
RNS.log("Replacing link-local address "+str(old_link_local_address)+" for "+str(ifname)+" with "+str(link_local_addr), RNS.LOG_DEBUG)
self.adopted_interfaces[ifname] = link_local_addr
self.link_local_addresses.append(link_local_addr)
if old_link_local_address in self.link_local_addresses:
self.link_local_addresses.remove(old_link_local_address)
local_addr = link_local_addr+"%"+ifname
addr_info = socket.getaddrinfo(local_addr, self.data_port, socket.AF_INET6, socket.SOCK_DGRAM)
listen_address = addr_info[0][4]
if ifname in self.interface_servers:
RNS.log("Shutting down previous UDP listener for "+str(self)+" "+str(ifname), RNS.LOG_DEBUG)
previous_server = self.interface_servers[ifname]
def shutdown_server():
previous_server.shutdown()
threading.Thread(target=shutdown_server, daemon=True).start()
RNS.log("Starting new UDP listener for "+str(self)+" "+str(ifname), RNS.LOG_DEBUG)
udp_server = socketserver.UDPServer(listen_address, self.handler_factory(self.processIncoming))
self.interface_servers[ifname] = udp_server
thread = threading.Thread(target=udp_server.serve_forever)
thread.daemon = True
thread.start()
self.carrier_changed = True
except Exception as e:
RNS.log("Could not get device information while updating link-local addresses for "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
# Check multicast echo timeouts
last_multicast_echo = 0
if ifname in self.multicast_echoes:
last_multicast_echo = self.multicast_echoes[ifname]
if now - last_multicast_echo > self.multicast_echo_timeout:
if ifname in self.timed_out_interfaces and self.timed_out_interfaces[ifname] == False:
self.carrier_changed = True
RNS.log("Multicast echo timeout for "+str(ifname)+". Carrier lost.", RNS.LOG_WARNING)
self.timed_out_interfaces[ifname] = True
else:
if ifname in self.timed_out_interfaces and self.timed_out_interfaces[ifname] == True:
self.carrier_changed = True
RNS.log(str(self)+" Carrier recovered on "+str(ifname), RNS.LOG_WARNING)
self.timed_out_interfaces[ifname] = False
@ -288,9 +410,11 @@ class AutoInterface(Interface):
announce_socket = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
addr_info = socket.getaddrinfo(self.mcast_discovery_address, self.discovery_port, socket.AF_INET6, socket.SOCK_DGRAM)
ifis = struct.pack("I", socket.if_nametoindex(ifname))
ifis = struct.pack("I", self.interface_name_to_index(ifname))
announce_socket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_IF, ifis)
announce_socket.sendto(discovery_token, addr_info[0][4])
announce_socket.close()
except Exception as e:
if (ifname in self.timed_out_interfaces and self.timed_out_interfaces[ifname] == False) or not ifname in self.timed_out_interfaces:
RNS.log(str(self)+" Detected possible carrier loss on "+str(ifname)+": "+str(e), RNS.LOG_WARNING)
@ -320,18 +444,30 @@ class AutoInterface(Interface):
self.peers[addr][1] = time.time()
def processIncoming(self, data):
self.rxb += len(data)
self.owner.inbound(data, self)
data_hash = RNS.Identity.full_hash(data)
deque_hit = False
if data_hash in self.mif_deque:
for te in self.mif_deque_times:
if te[0] == data_hash and time.time() < te[1]+AutoInterface.MULTI_IF_DEQUE_TTL:
deque_hit = True
break
if not deque_hit:
self.mif_deque.append(data_hash)
self.mif_deque_times.append([data_hash, time.time()])
self.rxb += len(data)
self.owner.inbound(data, self)
def processOutgoing(self,data):
for peer in self.peers:
try:
if self.outbound_udp_socket == None:
self.outbound_udp_socket = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
peer_addr = str(peer)+"%"+str(self.peers[peer][0])
peer_addr = str(peer)+"%"+str(self.interface_name_to_index(self.peers[peer][0]))
addr_info = socket.getaddrinfo(peer_addr, self.data_port, socket.AF_INET6, socket.SOCK_DGRAM)
self.outbound_udp_socket.sendto(data, addr_info[0][4])
except Exception as e:
RNS.log("Could not transmit on "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
@ -339,6 +475,11 @@ class AutoInterface(Interface):
self.txb += len(data)
# Until per-device sub-interfacing is implemented,
# ingress limiting should be disabled on AutoInterface
def should_ingress_limit(self):
return False
def __str__(self):
return "AutoInterface["+self.name+"]"

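One of the central changes in this AutoInterface revision is the discovery address format: the multicast group becomes "ff" + address-type nibble + scope nibble + ":" + a token derived from the group hash, so a temporary, link-local-scope configuration yields an address starting with ff12:. A rough sketch of the derivation under stated assumptions (SHA-256 stands in for the group hash, and the starting byte offset of the token is guessed, since only the tail of that loop is visible in the hunk above):

import hashlib

def discovery_address(group_id: bytes, addr_type: str = "1", scope: str = "2") -> str:
    # Pair hash bytes into 16-bit groups, as the gt += ... lines above do.
    g = hashlib.sha256(group_id).digest()
    gt = "0"
    for i in range(2, 14, 2):
        gt += ":" + "{:02x}".format(g[i+1] + (g[i] << 8))
    # "ff" + multicast address type ("1" = temporary) + scope ("2" = link-local)
    return "ff" + addr_type + scope + ":" + gt

print(discovery_address(b"reticulum"))   # -> an address of the form ff12:0:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
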
View File

@ -72,10 +72,12 @@ class I2PController:
self.client_tunnels = {}
self.server_tunnels = {}
self.i2plib_tunnels = {}
self.loop = None
self.i2plib = i2plib
self.utils = i2plib.utils
self.sam_address = i2plib.get_sam_address()
self.ready = False
self.storagepath = rns_storagepath+"/i2p"
if not os.path.isdir(self.storagepath):
@ -85,17 +87,35 @@ class I2PController:
def start(self):
asyncio.set_event_loop(asyncio.new_event_loop())
self.loop = asyncio.get_event_loop()
time.sleep(0.10)
if self.loop == None:
RNS.log("Could not get event loop for "+str(self)+", waiting for event loop to appear", RNS.LOG_VERBOSE)
while self.loop == None:
self.loop = asyncio.get_event_loop()
sleep(0.25)
try:
self.ready = True
self.loop.run_forever()
except Exception as e:
self.ready = False
RNS.log("Exception on event loop for "+str(self)+": "+str(e), RNS.LOG_ERROR)
finally:
self.loop.close()
def stop(self):
for task in asyncio.Task.all_tasks(loop=self.loop):
task.cancel()
for i2ptunnel in self.i2plib_tunnels:
if hasattr(i2ptunnel, "stop") and callable(i2ptunnel.stop):
i2ptunnel.stop()
if hasattr(asyncio.Task, "all_tasks") and callable(asyncio.Task.all_tasks):
for task in asyncio.Task.all_tasks(loop=self.loop):
task.cancel()
time.sleep(0.2)
self.loop.stop()
@ -104,8 +124,13 @@ class I2PController:
return self.i2plib.utils.get_free_port()
def stop_tunnel(self, i2ptunnel):
if hasattr(i2ptunnel, "stop") and callable(i2ptunnel.stop):
i2ptunnel.stop()
def client_tunnel(self, owner, i2p_destination):
self.client_tunnels[i2p_destination] = False
self.i2plib_tunnels[i2p_destination] = None
while True:
if not self.client_tunnels[i2p_destination]:
@ -113,29 +138,138 @@ class I2PController:
async def tunnel_up():
RNS.log("Bringing up I2P tunnel to "+str(owner)+", this may take a while...", RNS.LOG_INFO)
tunnel = self.i2plib.ClientTunnel(i2p_destination, owner.local_addr, sam_address=self.sam_address, loop=self.loop)
self.i2plib_tunnels[i2p_destination] = tunnel
await tunnel.run()
owner.awaiting_i2p_tunnel = False
RNS.log(str(owner)+ " tunnel setup complete", RNS.LOG_VERBOSE)
try:
self.loop.ext_owner = self
future = asyncio.run_coroutine_threadsafe(tunnel_up(), self.loop).result()
self.client_tunnels[i2p_destination] = True
self.loop.ext_owner = self
result = asyncio.run_coroutine_threadsafe(tunnel_up(), self.loop).result()
if not i2p_destination in self.i2plib_tunnels:
raise IOError("No tunnel control instance was created")
except Exception as e:
RNS.log("Error while setting up I2P tunnel: "+str(e))
raise e
else:
tn = self.i2plib_tunnels[i2p_destination]
if tn != None and hasattr(tn, "status"):
RNS.log("Waiting for status from I2P control process", RNS.LOG_EXTREME)
while not tn.status["setup_ran"]:
time.sleep(0.1)
RNS.log("Got status from I2P control process", RNS.LOG_EXTREME)
if tn.status["setup_failed"]:
self.stop_tunnel(tn)
raise tn.status["exception"]
else:
if owner.socket != None:
if hasattr(owner.socket, "close"):
if callable(owner.socket.close):
try:
owner.socket.shutdown(socket.SHUT_RDWR)
except Exception as e:
RNS.log("Error while shutting down socket for "+str(owner)+": "+str(e))
try:
owner.socket.close()
except Exception as e:
RNS.log("Error while closing socket for "+str(owner)+": "+str(e))
self.client_tunnels[i2p_destination] = True
owner.awaiting_i2p_tunnel = False
RNS.log(str(owner)+" tunnel setup complete", RNS.LOG_VERBOSE)
else:
raise IOError("Got no status response from SAM API")
except ConnectionRefusedError as e:
raise e
except ConnectionAbortedError as e:
raise e
except Exception as e:
raise IOError("Could not connect to I2P SAM API while configuring to "+str(owner)+". Check that I2P is running and SAM is enabled.")
RNS.log("Unexpected error type from I2P SAM: "+str(e), RNS.LOG_ERROR)
raise e
else:
i2ptunnel = self.i2plib_tunnels[i2p_destination]
if hasattr(i2ptunnel, "status"):
i2p_exception = i2ptunnel.status["exception"]
if i2ptunnel.status["setup_ran"] == False:
RNS.log(str(self)+" I2P tunnel setup did not complete", RNS.LOG_ERROR)
self.stop_tunnel(i2ptunnel)
return False
elif i2p_exception != None:
RNS.log("An error ocurred while setting up I2P tunnel to "+str(i2p_destination), RNS.LOG_ERROR)
if isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.CantReachPeer):
RNS.log("The I2P daemon can't reach peer "+str(i2p_destination), RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.DuplicatedDest):
RNS.log("The I2P daemon reported that the destination is already in use", RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.DuplicatedId):
RNS.log("The I2P daemon reported that the ID is arleady in use", RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.InvalidId):
RNS.log("The I2P daemon reported that the stream session ID doesn't exist", RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.InvalidKey):
RNS.log("The I2P daemon reported that the key for "+str(i2p_destination)+" is invalid", RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.KeyNotFound):
RNS.log("The I2P daemon could not find the key for "+str(i2p_destination), RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.PeerNotFound):
RNS.log("The I2P daemon mould not find the peer "+str(i2p_destination), RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.I2PError):
RNS.log("The I2P daemon experienced an unspecified error", RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.Timeout):
RNS.log("I2P daemon timed out while setting up client tunnel to "+str(i2p_destination), RNS.LOG_ERROR)
RNS.log("Resetting I2P tunnel and retrying later", RNS.LOG_ERROR)
self.stop_tunnel(i2ptunnel)
return False
elif i2ptunnel.status["setup_failed"] == True:
RNS.log(str(self)+" Unspecified I2P tunnel setup error, resetting I2P tunnel", RNS.LOG_ERROR)
self.stop_tunnel(i2ptunnel)
return False
else:
RNS.log(str(self)+" Got no status from SAM API, resetting I2P tunnel", RNS.LOG_ERROR)
self.stop_tunnel(i2ptunnel)
return False
# Wait for status from I2P control process
time.sleep(5)
def server_tunnel(self, owner):
i2p_dest_hash = RNS.Identity.full_hash(RNS.Identity.full_hash(owner.name.encode("utf-8")))
i2p_keyfile = self.storagepath+"/"+RNS.hexrep(i2p_dest_hash, delimit=False)+".i2p"
while RNS.Transport.identity == None:
time.sleep(1)
# Old format
i2p_dest_hash_of = RNS.Identity.full_hash(RNS.Identity.full_hash(owner.name.encode("utf-8")))
i2p_keyfile_of = self.storagepath+"/"+RNS.hexrep(i2p_dest_hash_of, delimit=False)+".i2p"
# New format
i2p_dest_hash_nf = RNS.Identity.full_hash(RNS.Identity.full_hash(owner.name.encode("utf-8"))+RNS.Identity.full_hash(RNS.Transport.identity.hash))
i2p_keyfile_nf = self.storagepath+"/"+RNS.hexrep(i2p_dest_hash_nf, delimit=False)+".i2p"
# Use old format if a key is already present
if os.path.isfile(i2p_keyfile_of):
i2p_keyfile = i2p_keyfile_of
else:
i2p_keyfile = i2p_keyfile_nf
i2p_dest = None
if not os.path.isfile(i2p_keyfile):
@ -154,20 +288,82 @@ class I2PController:
owner.b32 = i2p_b32
self.server_tunnels[i2p_b32] = False
self.i2plib_tunnels[i2p_b32] = None
while self.server_tunnels[i2p_b32] == False:
try:
async def tunnel_up():
RNS.log(str(owner)+" Bringing up I2P endpoint, this may take a while...", RNS.LOG_INFO)
tunnel = self.i2plib.ServerTunnel((owner.bind_ip, owner.bind_port), loop=self.loop, destination=i2p_dest, sam_address=self.sam_address)
await tunnel.run()
RNS.log(str(owner)+ " endpoint setup complete. Now reachable at: "+str(i2p_dest.base32)+".b32.i2p", RNS.LOG_VERBOSE)
while True:
if self.server_tunnels[i2p_b32] == False:
try:
async def tunnel_up():
RNS.log(str(owner)+" Bringing up I2P endpoint, this may take a while...", RNS.LOG_INFO)
tunnel = self.i2plib.ServerTunnel((owner.bind_ip, owner.bind_port), loop=self.loop, destination=i2p_dest, sam_address=self.sam_address)
self.i2plib_tunnels[i2p_b32] = tunnel
await tunnel.run()
owner.online = True
RNS.log(str(owner)+ " endpoint setup complete. Now reachable at: "+str(i2p_dest.base32)+".b32.i2p", RNS.LOG_VERBOSE)
asyncio.run_coroutine_threadsafe(tunnel_up(), self.loop).result()
self.server_tunnels[i2p_b32] = True
asyncio.run_coroutine_threadsafe(tunnel_up(), self.loop).result()
self.server_tunnels[i2p_b32] = True
except Exception as e:
raise IOError("Could not connect to I2P SAM API while configuring "+str(self)+". Check that I2P is running and SAM is enabled.")
except Exception as e:
raise e
else:
i2ptunnel = self.i2plib_tunnels[i2p_b32]
if hasattr(i2ptunnel, "status"):
i2p_exception = i2ptunnel.status["exception"]
if i2ptunnel.status["setup_ran"] == False:
RNS.log(str(self)+" I2P tunnel setup did not complete", RNS.LOG_ERROR)
self.stop_tunnel(i2ptunnel)
return False
elif i2p_exception != None:
RNS.log("An error ocurred while setting up I2P tunnel", RNS.LOG_ERROR)
if isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.CantReachPeer):
RNS.log("The I2P daemon can't reach peer "+str(i2p_destination), RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.DuplicatedDest):
RNS.log("The I2P daemon reported that the destination is already in use", RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.DuplicatedId):
RNS.log("The I2P daemon reported that the ID is arleady in use", RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.InvalidId):
RNS.log("The I2P daemon reported that the stream session ID doesn't exist", RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.InvalidKey):
RNS.log("The I2P daemon reported that the key for "+str(i2p_destination)+" is invalid", RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.KeyNotFound):
RNS.log("The I2P daemon could not find the key for "+str(i2p_destination), RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.PeerNotFound):
RNS.log("The I2P daemon mould not find the peer "+str(i2p_destination), RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.I2PError):
RNS.log("The I2P daemon experienced an unspecified error", RNS.LOG_ERROR)
elif isinstance(i2p_exception, RNS.vendor.i2plib.exceptions.Timeout):
RNS.log("I2P daemon timed out while setting up client tunnel to "+str(i2p_destination), RNS.LOG_ERROR)
RNS.log("Resetting I2P tunnel and retrying later", RNS.LOG_ERROR)
self.stop_tunnel(i2ptunnel)
return False
elif i2ptunnel.status["setup_failed"] == True:
RNS.log(str(self)+" Unspecified I2P tunnel setup error, resetting I2P tunnel", RNS.LOG_ERROR)
self.stop_tunnel(i2ptunnel)
return False
else:
RNS.log(str(self)+" Got no status from SAM API, resetting I2P tunnel", RNS.LOG_ERROR)
self.stop_tunnel(i2ptunnel)
return False
time.sleep(5)
@ -183,14 +379,20 @@ class I2PInterfacePeer(Interface):
RECONNECT_MAX_TRIES = None
# TCP socket options
I2P_USER_TIMEOUT = 40
I2P_USER_TIMEOUT = 45
I2P_PROBE_AFTER = 10
I2P_PROBE_INTERVAL = 5
I2P_PROBES = 6
I2P_PROBE_INTERVAL = 9
I2P_PROBES = 5
I2P_READ_TIMEOUT = (I2P_PROBE_INTERVAL * I2P_PROBES + I2P_PROBE_AFTER)*2
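# Illustrative note, not part of the source: with the updated values in this
# hunk, the read watchdog further below marks the tunnel stale after
# I2P_PROBE_AFTER*2 = 20 s without a read, sends an HDLC keepalive once
# last_write is older than I2P_PROBE_AFTER = 10 s, and recycles the socket
# after I2P_READ_TIMEOUT = (I2P_PROBE_INTERVAL*I2P_PROBES + I2P_PROBE_AFTER)*2
# = (9*5 + 10)*2 = 110 s of silence.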
TUNNEL_STATE_INIT = 0x00
TUNNEL_STATE_ACTIVE = 0x01
TUNNEL_STATE_STALE = 0x02
def __init__(self, parent_interface, owner, name, target_i2p_dest=None, connected_socket=None, max_reconnect_tries=None):
self.rxb = 0
self.txb = 0
super().__init__()
self.HW_MTU = 1064
self.IN = True
self.OUT = False
@ -211,6 +413,34 @@ class I2PInterfacePeer(Interface):
self.i2p_tunnel_ready = False
self.mode = RNS.Interfaces.Interface.Interface.MODE_FULL
self.bitrate = I2PInterface.BITRATE_GUESS
self.last_read = 0
self.last_write = 0
self.wd_reset = False
self.i2p_tunnel_state = I2PInterfacePeer.TUNNEL_STATE_INIT
self.ifac_size = self.parent_interface.ifac_size
self.ifac_netname = self.parent_interface.ifac_netname
self.ifac_netkey = self.parent_interface.ifac_netkey
if self.ifac_netname != None or self.ifac_netkey != None:
ifac_origin = b""
if self.ifac_netname != None:
ifac_origin += RNS.Identity.full_hash(self.ifac_netname.encode("utf-8"))
if self.ifac_netkey != None:
ifac_origin += RNS.Identity.full_hash(self.ifac_netkey.encode("utf-8"))
ifac_origin_hash = RNS.Identity.full_hash(ifac_origin)
self.ifac_key = RNS.Cryptography.hkdf(
length=64,
derive_from=ifac_origin_hash,
salt=RNS.Reticulum.IFAC_SALT,
context=None
)
self.ifac_identity = RNS.Identity.from_bytes(self.ifac_key)
self.ifac_signature = self.ifac_identity.sign(RNS.Identity.full_hash(self.ifac_key))
self.announce_rate_target = None
self.announce_rate_grace = None
self.announce_rate_penalty = None
if max_reconnect_tries == None:
self.max_reconnect_tries = I2PInterfacePeer.RECONNECT_MAX_TRIES
@ -233,54 +463,59 @@ class I2PInterfacePeer(Interface):
self.initiator = True
self.bind_ip = "127.0.0.1"
self.bind_port = self.parent_interface.i2p.get_free_port()
self.local_addr = (self.bind_ip, self.bind_port)
self.target_ip = self.bind_ip
self.target_port = self.bind_port
self.awaiting_i2p_tunnel = True
def tunnel_job():
self.parent_interface.i2p.client_tunnel(self, target_i2p_dest)
while self.awaiting_i2p_tunnel:
try:
self.bind_port = self.parent_interface.i2p.get_free_port()
self.local_addr = (self.bind_ip, self.bind_port)
self.target_ip = self.bind_ip
self.target_port = self.bind_port
if not self.parent_interface.i2p.client_tunnel(self, target_i2p_dest):
RNS.log(str(self)+" I2P control process experienced an error, requesting new tunnel...", RNS.LOG_ERROR)
self.awaiting_i2p_tunnel = True
except Exception as e:
RNS.log("Error while while configuring "+str(self)+": "+str(e), RNS.LOG_ERROR)
RNS.log("Check that I2P is installed and running, and that SAM is enabled. Retrying tunnel setup later.", RNS.LOG_ERROR)
time.sleep(8)
thread = threading.Thread(target=tunnel_job)
thread.setDaemon(True)
thread.daemon = True
thread.start()
def wait_job():
while self.awaiting_i2p_tunnel:
time.sleep(0.25)
time.sleep(2)
if not self.kiss_framing:
self.wants_tunnel = True
if not self.connect(initial=True):
thread = threading.Thread(target=self.reconnect)
thread.setDaemon(True)
thread.daemon = True
thread.start()
else:
thread = threading.Thread(target=self.read_loop)
thread.setDaemon(True)
thread.daemon = True
thread.start()
thread = threading.Thread(target=wait_job)
thread.setDaemon(True)
thread.daemon = True
thread.start()
def set_timeouts_linux(self):
if not self.i2p_tunneled:
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, int(I2PInterfacePeer.TCP_USER_TIMEOUT * 1000))
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, int(I2PInterfacePeer.TCP_PROBE_AFTER))
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, int(I2PInterfacePeer.TCP_PROBE_INTERVAL))
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, int(I2PInterfacePeer.TCP_PROBES))
else:
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, int(I2PInterfacePeer.I2P_USER_TIMEOUT * 1000))
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, int(I2PInterfacePeer.I2P_PROBE_AFTER))
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, int(I2PInterfacePeer.I2P_PROBE_INTERVAL))
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, int(I2PInterfacePeer.I2P_PROBES))
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, int(I2PInterfacePeer.I2P_USER_TIMEOUT * 1000))
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, int(I2PInterfacePeer.I2P_PROBE_AFTER))
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, int(I2PInterfacePeer.I2P_PROBE_INTERVAL))
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, int(I2PInterfacePeer.I2P_PROBES))
def set_timeouts_osx(self):
if hasattr(socket, "TCP_KEEPALIVE"):
@ -289,17 +524,27 @@ class I2PInterfacePeer(Interface):
TCP_KEEPIDLE = 0x10
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
if not self.i2p_tunneled:
self.socket.setsockopt(socket.IPPROTO_TCP, TCP_KEEPIDLE, int(I2PInterfacePeer.TCP_PROBE_AFTER))
else:
self.socket.setsockopt(socket.IPPROTO_TCP, TCP_KEEPIDLE, int(I2PInterfacePeer.I2P_PROBE_AFTER))
self.socket.setsockopt(socket.IPPROTO_TCP, TCP_KEEPIDLE, int(I2PInterfacePeer.I2P_PROBE_AFTER))
def shutdown_socket(self, target_socket):
if callable(target_socket.close):
try:
if socket != None:
target_socket.shutdown(socket.SHUT_RDWR)
except Exception as e:
RNS.log("Error while shutting down socket for "+str(self)+": "+str(e))
try:
if socket != None:
target_socket.close()
except Exception as e:
RNS.log("Error while closing socket for "+str(self)+": "+str(e))
def detach(self):
RNS.log("Detaching "+str(self), RNS.LOG_DEBUG)
if self.socket != None:
if hasattr(self.socket, "close"):
if callable(self.socket.close):
RNS.log("Detaching "+str(self), RNS.LOG_DEBUG)
self.detached = True
try:
@ -345,7 +590,6 @@ class I2PInterfacePeer(Interface):
return True
def reconnect(self):
if self.initiator:
if not self.reconnecting:
@ -374,7 +618,7 @@ class I2PInterfacePeer(Interface):
self.reconnecting = False
thread = threading.Thread(target=self.read_loop)
thread.setDaemon(True)
thread.daemon = True
thread.start()
if not self.kiss_framing:
RNS.Transport.synthesize_tunnel(self)
@ -393,7 +637,7 @@ class I2PInterfacePeer(Interface):
def processOutgoing(self, data):
if self.online:
while self.writing:
time.sleep(0.01)
time.sleep(0.001)
try:
self.writing = True
@ -406,6 +650,8 @@ class I2PInterfacePeer(Interface):
self.socket.sendall(data)
self.writing = False
self.txb += len(data)
self.last_write = time.time()
if hasattr(self, "parent_interface") and self.parent_interface != None and self.parent_count:
self.parent_interface.txb += len(data)
@ -415,8 +661,59 @@ class I2PInterfacePeer(Interface):
self.teardown()
def read_watchdog(self):
while self.wd_reset:
time.sleep(0.25)
should_run = True
try:
while should_run and not self.wd_reset:
time.sleep(1)
if (time.time()-self.last_read > I2PInterfacePeer.I2P_PROBE_AFTER*2):
if self.i2p_tunnel_state != I2PInterfacePeer.TUNNEL_STATE_STALE:
RNS.log("I2P tunnel became unresponsive", RNS.LOG_DEBUG)
self.i2p_tunnel_state = I2PInterfacePeer.TUNNEL_STATE_STALE
else:
self.i2p_tunnel_state = I2PInterfacePeer.TUNNEL_STATE_ACTIVE
if (time.time()-self.last_write > I2PInterfacePeer.I2P_PROBE_AFTER*1):
try:
if self.socket != None:
self.socket.sendall(bytes([HDLC.FLAG, HDLC.FLAG]))
except Exception as e:
RNS.log("An error ocurred while sending I2P keepalive. The contained exception was: "+str(e), RNS.LOG_ERROR)
self.shutdown_socket(self.socket)
should_run = False
if (time.time()-self.last_read > I2PInterfacePeer.I2P_READ_TIMEOUT):
RNS.log("I2P socket is unresponsive, restarting...", RNS.LOG_WARNING)
if self.socket != None:
try:
self.socket.shutdown(socket.SHUT_RDWR)
except Exception as e:
RNS.log("Error while shutting down socket for "+str(self)+": "+str(e))
try:
self.socket.close()
except Exception as e:
RNS.log("Error while closing socket for "+str(self)+": "+str(e))
should_run = False
self.wd_reset = False
finally:
self.wd_reset = False
def read_loop(self):
try:
self.last_read = time.time()
self.last_write = time.time()
wd_thread = threading.Thread(target=self.read_watchdog, daemon=True).start()
in_frame = False
escape = False
data_buffer = b""
@ -426,6 +723,7 @@ class I2PInterfacePeer(Interface):
data_in = self.socket.recv(4096)
if len(data_in) > 0:
pointer = 0
self.last_read = time.time()
while pointer < len(data_in):
byte = data_in[pointer]
pointer += 1
@ -439,7 +737,7 @@ class I2PInterfacePeer(Interface):
in_frame = True
command = KISS.CMD_UNKNOWN
data_buffer = b""
elif (in_frame and len(data_buffer) < RNS.Reticulum.MTU):
elif (in_frame and len(data_buffer) < self.HW_MTU):
if (len(data_buffer) == 0 and command == KISS.CMD_UNKNOWN):
# We only support one HDLC port for now, so
# strip off the port nibble
@ -465,7 +763,7 @@ class I2PInterfacePeer(Interface):
elif (byte == HDLC.FLAG):
in_frame = True
data_buffer = b""
elif (in_frame and len(data_buffer) < RNS.Reticulum.MTU):
elif (in_frame and len(data_buffer) < self.HW_MTU):
if (byte == HDLC.ESC):
escape = True
else:
@ -478,6 +776,11 @@ class I2PInterfacePeer(Interface):
data_buffer = data_buffer+bytes([byte])
else:
self.online = False
self.wd_reset = True
time.sleep(2)
self.wd_reset = False
if self.initiator and not self.detached:
RNS.log("Socket for "+str(self)+" was closed, attempting to reconnect...", RNS.LOG_WARNING)
self.reconnect()
@ -512,7 +815,8 @@ class I2PInterfacePeer(Interface):
self.IN = False
if hasattr(self, "parent_interface") and self.parent_interface != None:
self.parent_interface.clients -= 1
if self.parent_interface.clients > 0:
self.parent_interface.clients -= 1
if self in RNS.Transport.interfaces:
if not self.initiator:
@ -526,9 +830,11 @@ class I2PInterfacePeer(Interface):
class I2PInterface(Interface):
BITRATE_GUESS = 256*1000
def __init__(self, owner, name, rns_storagepath, peers, connectable = True):
self.rxb = 0
self.txb = 0
def __init__(self, owner, name, rns_storagepath, peers, connectable = False, ifac_size = 16, ifac_netname = None, ifac_netkey = None):
super().__init__()
self.HW_MTU = 1064
self.online = False
self.clients = 0
self.owner = owner
@ -549,11 +855,29 @@ class I2PInterface(Interface):
self.bind_port = self.i2p.get_free_port()
self.address = (self.bind_ip, self.bind_port)
self.bitrate = I2PInterface.BITRATE_GUESS
self.ifac_size = ifac_size
self.ifac_netname = ifac_netname
self.ifac_netkey = ifac_netkey
self.online = False
i2p_thread = threading.Thread(target=self.i2p.start)
i2p_thread.setDaemon(True)
i2p_thread.daemon = True
i2p_thread.start()
i2p_notready_warning = False
time.sleep(0.25)
if not self.i2p.ready:
RNS.log("I2P controller did not become available in time, waiting for controller", RNS.LOG_VERBOSE)
i2p_notready_warning = True
while not self.i2p.ready:
time.sleep(0.25)
if i2p_notready_warning == True:
RNS.log("I2P controller ready, continuing setup", RNS.LOG_VERBOSE)
def handlerFactory(callback):
def createHandler(*args, **keys):
return I2PInterfaceHandler(callback, *args, **keys)
@ -563,20 +887,31 @@ class I2PInterface(Interface):
self.server = ThreadingI2PServer(self.address, handlerFactory(self.incoming_connection))
thread = threading.Thread(target=self.server.serve_forever)
thread.setDaemon(True)
thread.daemon = True
thread.start()
if self.connectable:
def tunnel_job():
self.i2p.server_tunnel(self)
while True:
try:
if not self.i2p.server_tunnel(self):
RNS.log(str(self)+" I2P control process experienced an error, requesting new tunnel...", RNS.LOG_ERROR)
self.online = False
except Exception as e:
RNS.log("Error while while configuring "+str(self)+": "+str(e), RNS.LOG_ERROR)
RNS.log("Check that I2P is installed and running, and that SAM is enabled. Retrying tunnel setup later.", RNS.LOG_ERROR)
time.sleep(15)
thread = threading.Thread(target=tunnel_job)
thread.setDaemon(True)
thread.daemon = True
thread.start()
if peers != None:
for peer_addr in peers:
interface_name = peer_addr
interface_name = self.name+" to "+peer_addr
peer_interface = I2PInterfacePeer(self, self.owner, interface_name, peer_addr)
peer_interface.OUT = True
peer_interface.IN = True
@ -584,9 +919,6 @@ class I2PInterface(Interface):
peer_interface.parent_count = False
RNS.Transport.interfaces.append(peer_interface)
self.online = True
def incoming_connection(self, handler):
RNS.log("Accepting incoming I2P connection", RNS.LOG_VERBOSE)
interface_name = "Connected peer on "+self.name
@ -596,12 +928,32 @@ class I2PInterface(Interface):
spawned_interface.parent_interface = self
spawned_interface.online = True
spawned_interface.bitrate = self.bitrate
spawned_interface.ifac_size = self.ifac_size
spawned_interface.ifac_netname = self.ifac_netname
spawned_interface.ifac_netkey = self.ifac_netkey
if spawned_interface.ifac_netname != None or spawned_interface.ifac_netkey != None:
ifac_origin = b""
if spawned_interface.ifac_netname != None:
ifac_origin += RNS.Identity.full_hash(spawned_interface.ifac_netname.encode("utf-8"))
if spawned_interface.ifac_netkey != None:
ifac_origin += RNS.Identity.full_hash(spawned_interface.ifac_netkey.encode("utf-8"))
ifac_origin_hash = RNS.Identity.full_hash(ifac_origin)
spawned_interface.ifac_key = RNS.Cryptography.hkdf(
length=64,
derive_from=ifac_origin_hash,
salt=RNS.Reticulum.IFAC_SALT,
context=None
)
spawned_interface.ifac_identity = RNS.Identity.from_bytes(spawned_interface.ifac_key)
spawned_interface.ifac_signature = spawned_interface.ifac_identity.sign(RNS.Identity.full_hash(spawned_interface.ifac_key))
spawned_interface.announce_rate_target = self.announce_rate_target
spawned_interface.announce_rate_grace = self.announce_rate_grace
spawned_interface.announce_rate_penalty = self.announce_rate_penalty
spawned_interface.mode = self.mode
spawned_interface.HW_MTU = self.HW_MTU
RNS.log("Spawned new I2PInterface Peer: "+str(spawned_interface), RNS.LOG_VERBOSE)
RNS.Transport.interfaces.append(spawned_interface)
self.clients += 1
@ -610,7 +962,14 @@ class I2PInterface(Interface):
def processOutgoing(self, data):
pass
def received_announce(self, from_spawned=False):
if from_spawned: self.ia_freq_deque.append(time.time())
def sent_announce(self, from_spawned=False):
if from_spawned: self.oa_freq_deque.append(time.time())
def detach(self):
RNS.log("Detaching "+str(self), RNS.LOG_DEBUG)
self.i2p.stop()
def __str__(self):


@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@ -23,6 +23,7 @@
import RNS
import time
import threading
from collections import deque
class Interface:
IN = False
@ -31,20 +32,169 @@ class Interface:
RPT = False
name = None
# Interface mode definitions
MODE_FULL = 0x01
MODE_POINT_TO_POINT = 0x02
MODE_ACCESS_POINT = 0x03
MODE_ROAMING = 0x04
MODE_BOUNDARY = 0x05
MODE_GATEWAY = 0x06
# Which interface modes a Transport Node should
# actively discover paths for.
DISCOVER_PATHS_FOR = [MODE_ACCESS_POINT, MODE_GATEWAY, MODE_ROAMING]
# How many samples to use for announce
# frequency calculations
IA_FREQ_SAMPLES = 6
OA_FREQ_SAMPLES = 6
# Maximum amount of ingress limited announces
# to hold at any given time.
MAX_HELD_ANNOUNCES = 256
# How long a spawned interface will be
# considered to be newly created. Two
# hours by default.
IC_NEW_TIME = 2*60*60
IC_BURST_FREQ_NEW = 3.5
IC_BURST_FREQ = 12
IC_BURST_HOLD = 1*60
IC_BURST_PENALTY = 5*60
IC_HELD_RELEASE_INTERVAL = 30
def __init__(self):
self.rxb = 0
self.txb = 0
self.created = time.time()
self.online = False
self.bitrate = 1e6
self.ingress_control = True
self.ic_max_held_announces = Interface.MAX_HELD_ANNOUNCES
self.ic_burst_hold = Interface.IC_BURST_HOLD
self.ic_burst_active = False
self.ic_burst_activated = 0
self.ic_held_release = 0
self.ic_burst_freq_new = Interface.IC_BURST_FREQ_NEW
self.ic_burst_freq = Interface.IC_BURST_FREQ
self.ic_new_time = Interface.IC_NEW_TIME
self.ic_burst_penalty = Interface.IC_BURST_PENALTY
self.ic_held_release_interval = Interface.IC_HELD_RELEASE_INTERVAL
self.held_announces = {}
self.ia_freq_deque = deque(maxlen=Interface.IA_FREQ_SAMPLES)
self.oa_freq_deque = deque(maxlen=Interface.OA_FREQ_SAMPLES)
def get_hash(self):
return RNS.Identity.full_hash(str(self).encode("utf-8"))
# This is a generic function for determining when an interface
# should activate ingress limiting. Since this can vary for
# different interface types, this function should be overwritten
# in case a particular interface requires a different approach.
def should_ingress_limit(self):
if self.ingress_control:
freq_threshold = self.ic_burst_freq_new if self.age() < self.ic_new_time else self.ic_burst_freq
ia_freq = self.incoming_announce_frequency()
if self.ic_burst_active:
if ia_freq < freq_threshold and time.time() > self.ic_burst_activated+self.ic_burst_hold:
self.ic_burst_active = False
self.ic_held_release = time.time() + self.ic_burst_penalty
return True
else:
if ia_freq > freq_threshold:
self.ic_burst_active = True
self.ic_burst_activated = time.time()
return True
else:
return False
else:
return False
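Interfaces that should never rate-limit inbound announces can simply override the generic check above; the serial-based interfaces later in this diff do exactly that. A minimal sketch of such an override (the subclass name is hypothetical):
# Minimal sketch of overriding the generic ingress limiter in a subclass
# (hypothetical interface type, mirroring what KISS/Serial/RNode do below):
class AlwaysOpenInterface(Interface):
    def should_ingress_limit(self):
        # Never hold announces on this interface type
        return False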
def age(self):
return time.time()-self.created
def hold_announce(self, announce_packet):
if announce_packet.destination_hash in self.held_announces:
self.held_announces[announce_packet.destination_hash] = announce_packet
elif not len(self.held_announces) >= self.ic_max_held_announces:
self.held_announces[announce_packet.destination_hash] = announce_packet
def process_held_announces(self):
try:
if not self.should_ingress_limit() and len(self.held_announces) > 0 and time.time() > self.ic_held_release:
freq_threshold = self.ic_burst_freq_new if self.age() < self.ic_new_time else self.ic_burst_freq
ia_freq = self.incoming_announce_frequency()
if ia_freq < freq_threshold:
selected_announce_packet = None
min_hops = RNS.Transport.PATHFINDER_M
for destination_hash in self.held_announces:
announce_packet = self.held_announces[destination_hash]
if announce_packet.hops < min_hops:
min_hops = announce_packet.hops
selected_announce_packet = announce_packet
if selected_announce_packet != None:
RNS.log("Releasing held announce packet "+str(selected_announce_packet)+" from "+str(self), RNS.LOG_EXTREME)
self.ic_held_release = time.time() + self.ic_held_release_interval
self.held_announces.pop(selected_announce_packet.destination_hash)
def release():
RNS.Transport.inbound(selected_announce_packet.raw, selected_announce_packet.receiving_interface)
threading.Thread(target=release, daemon=True).start()
except Exception as e:
RNS.log("An error occurred while processing held announces for "+str(self), RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
def received_announce(self):
self.ia_freq_deque.append(time.time())
if hasattr(self, "parent_interface") and self.parent_interface != None:
self.parent_interface.received_announce(from_spawned=True)
def sent_announce(self):
self.oa_freq_deque.append(time.time())
if hasattr(self, "parent_interface") and self.parent_interface != None:
self.parent_interface.sent_announce(from_spawned=True)
def incoming_announce_frequency(self):
if not len(self.ia_freq_deque) > 1:
return 0
else:
dq_len = len(self.ia_freq_deque)
delta_sum = 0
for i in range(1,dq_len):
delta_sum += self.ia_freq_deque[i]-self.ia_freq_deque[i-1]
delta_sum += time.time() - self.ia_freq_deque[dq_len-1]
if delta_sum == 0:
avg = 0
else:
avg = 1/(delta_sum/(dq_len))
return avg
def outgoing_announce_frequency(self):
if not len(self.oa_freq_deque) > 1:
return 0
else:
dq_len = len(self.oa_freq_deque)
delta_sum = 0
for i in range(1,dq_len):
delta_sum += self.oa_freq_deque[i]-self.oa_freq_deque[i-1]
delta_sum += time.time() - self.oa_freq_deque[dq_len-1]
if delta_sum == 0:
avg = 0
else:
avg = 1/(delta_sum/(dq_len))
return avg
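The two frequency helpers above return the average announce rate in announces per second, including the time elapsed since the most recent announce. A small standalone sketch of the same arithmetic with hypothetical timestamps:
# Standalone sketch of the announce-frequency math above (timestamps hypothetical):
from collections import deque

ia_freq_deque = deque([100.0, 110.0, 120.0], maxlen=6)  # three announces, 10 s apart
now = 130.0                                             # stand-in for time.time()

dq_len = len(ia_freq_deque)
delta_sum = sum(ia_freq_deque[i] - ia_freq_deque[i-1] for i in range(1, dq_len))
delta_sum += now - ia_freq_deque[dq_len-1]
avg = 0 if delta_sum == 0 else 1/(delta_sum/dq_len)
print(avg)  # 0.1 -> roughly one announce every 10 seconds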
def process_announce_queue(self):
if not hasattr(self, "announce_cap"):
self.announce_cap = RNS.Reticulum.ANNOUNCE_CAP
@ -73,6 +223,7 @@ class Interface:
self.announce_allowed_at = now + wait_time
self.processOutgoing(selected["raw"])
self.sent_announce()
if selected in self.announce_queue:
self.announce_queue.remove(selected)


@ -70,8 +70,9 @@ class KISSInterface(Interface):
RNS.log("You can install one with the command: python3 -m pip install pyserial", RNS.LOG_CRITICAL)
RNS.panic()
self.rxb = 0
self.txb = 0
super().__init__()
self.HW_MTU = 564
if beacon_data == None:
beacon_data = ""
@ -142,7 +143,7 @@ class KISSInterface(Interface):
# Allow time for interface to initialise before config
sleep(2.0)
thread = threading.Thread(target=self.readLoop)
thread.setDaemon(True)
thread.daemon = True
thread.start()
self.online = True
RNS.log("Serial port "+self.port+" is now open")
@ -279,7 +280,7 @@ class KISSInterface(Interface):
in_frame = True
command = KISS.CMD_UNKNOWN
data_buffer = b""
elif (in_frame and len(data_buffer) < RNS.Reticulum.MTU):
elif (in_frame and len(data_buffer) < self.HW_MTU):
if (len(data_buffer) == 0 and command == KISS.CMD_UNKNOWN):
# We only support one HDLC port for now, so
# strip off the port nibble
@ -347,5 +348,8 @@ class KISSInterface(Interface):
RNS.log("Reconnected serial port for "+str(self))
def should_ingress_limit(self):
return False
def __str__(self):
return "KISSInterface["+self.name+"]"


@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@ -28,6 +28,7 @@ import time
import sys
import os
import RNS
from threading import Lock
class HDLC():
FLAG = 0x7E
@ -41,14 +42,25 @@ class HDLC():
return data
class ThreadingTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
pass
def server_bind(self):
if RNS.vendor.platformutils.is_windows():
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)
else:
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.socket.bind(self.server_address)
self.server_address = self.socket.getsockname()
class LocalClientInterface(Interface):
RECONNECT_WAIT = 3
RECONNECT_WAIT = 8
def __init__(self, owner, name, target_port = None, connected_socket=None):
self.rxb = 0
self.txb = 0
super().__init__()
# TODO: Remove at some point
# self.rxptime = 0
self.HW_MTU = 1064
self.online = False
self.IN = True
@ -66,6 +78,7 @@ class LocalClientInterface(Interface):
self.target_ip = None
self.target_port = None
self.socket = connected_socket
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
self.is_connected_to_shared_instance = False
@ -80,17 +93,23 @@ class LocalClientInterface(Interface):
self.online = True
self.writing = False
self._force_bitrate = False
self.announce_rate_target = None
self.announce_rate_grace = None
self.announce_rate_penalty = None
if connected_socket == None:
thread = threading.Thread(target=self.read_loop)
thread.setDaemon(True)
thread.daemon = True
thread.start()
def should_ingress_limit(self):
return False
def connect(self):
self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
self.socket.connect((self.target_ip, self.target_port))
self.online = True
@ -117,13 +136,16 @@ class LocalClientInterface(Interface):
RNS.log("Connection attempt for "+str(self)+" failed: "+str(e), RNS.LOG_DEBUG)
if not self.never_connected:
RNS.log("Reconnected TCP socket for "+str(self)+".", RNS.LOG_INFO)
RNS.log("Reconnected socket for "+str(self)+".", RNS.LOG_INFO)
self.reconnecting = False
thread = threading.Thread(target=self.read_loop)
thread.setDaemon(True)
thread.daemon = True
thread.start()
RNS.Transport.shared_connection_reappeared()
def job():
time.sleep(LocalClientInterface.RECONNECT_WAIT+2)
RNS.Transport.shared_connection_reappeared()
threading.Thread(target=job, daemon=True).start()
else:
RNS.log("Attempt to reconnect on a non-initiator shared local interface. This should not happen.", RNS.LOG_ERROR)
@ -134,17 +156,30 @@ class LocalClientInterface(Interface):
self.rxb += len(data)
if hasattr(self, "parent_interface") and self.parent_interface != None:
self.parent_interface.rxb += len(data)
# TODO: Remove at some point
# processing_start = time.time()
self.owner.inbound(data, self)
# TODO: Remove at some point
# duration = time.time() - processing_start
# self.rxptime += duration
def processOutgoing(self, data):
if self.online:
while self.writing:
time.sleep(0.01)
try:
self.writing = True
if self._force_bitrate:
if not hasattr(self, "send_lock"):
self.send_lock = Lock()
with self.send_lock:
s = len(data) / self.bitrate * 8
RNS.log(f"Simulating latency of {RNS.prettytime(s)} for {len(data)} bytes")
time.sleep(s)
data = bytes([HDLC.FLAG])+HDLC.escape(data)+bytes([HDLC.FLAG])
self.socket.sendall(data)
self.writing = False
@ -177,7 +212,7 @@ class LocalClientInterface(Interface):
elif (byte == HDLC.FLAG):
in_frame = True
data_buffer = b""
elif (in_frame and len(data_buffer) < RNS.Reticulum.MTU):
elif (in_frame and len(data_buffer) < self.HW_MTU):
if (byte == HDLC.ESC):
escape = True
else:
@ -237,6 +272,8 @@ class LocalClientInterface(Interface):
RNS.Transport.local_client_interfaces.remove(self)
if hasattr(self, "parent_interface") and self.parent_interface != None:
self.parent_interface.clients -= 1
if hasattr(RNS.Transport, "owner") and RNS.Transport.owner != None:
RNS.Transport.owner._should_persist_data()
if nowarning == False:
RNS.log("The interface "+str(self)+" experienced an unrecoverable error and is being torn down. Restart Reticulum to attempt to open this interface again.", RNS.LOG_ERROR)
@ -257,8 +294,7 @@ class LocalClientInterface(Interface):
class LocalServerInterface(Interface):
def __init__(self, owner, bindport=None):
self.rxb = 0
self.txb = 0
super().__init__()
self.online = False
self.clients = 0
@ -282,11 +318,10 @@ class LocalServerInterface(Interface):
address = (self.bind_ip, self.bind_port)
ThreadingTCPServer.allow_reuse_address = True
self.server = ThreadingTCPServer(address, handlerFactory(self.incoming_connection))
thread = threading.Thread(target=self.server.serve_forever)
thread.setDaemon(True)
thread.daemon = True
thread.start()
self.announce_rate_target = None
@ -307,7 +342,9 @@ class LocalServerInterface(Interface):
spawned_interface.target_port = str(handler.client_address[1])
spawned_interface.parent_interface = self
spawned_interface.bitrate = self.bitrate
RNS.log("Accepting new connection to shared instance: "+str(spawned_interface), RNS.LOG_EXTREME)
if hasattr(self, "_force_bitrate"):
spawned_interface._force_bitrate = self._force_bitrate
# RNS.log("Accepting new connection to shared instance: "+str(spawned_interface), RNS.LOG_EXTREME)
RNS.Transport.interfaces.append(spawned_interface)
RNS.Transport.local_client_interfaces.append(spawned_interface)
self.clients += 1
@ -316,6 +353,12 @@ class LocalServerInterface(Interface):
def processOutgoing(self, data):
pass
def received_announce(self, from_spawned=False):
if from_spawned: self.ia_freq_deque.append(time.time())
def sent_announce(self, from_spawned=False):
if from_spawned: self.oa_freq_deque.append(time.time())
def __str__(self):
return "Shared Instance["+str(self.bind_port)+"]"


@ -54,8 +54,9 @@ class PipeInterface(Interface):
if respawn_delay == None:
respawn_delay = 5
self.rxb = 0
self.txb = 0
super().__init__()
self.HW_MTU = 1064
self.owner = owner
self.name = name
@ -94,7 +95,7 @@ class PipeInterface(Interface):
def configure_pipe(self):
sleep(0.01)
thread = threading.Thread(target=self.readLoop)
thread.setDaemon(True)
thread.daemon = True
thread.start()
self.online = True
RNS.log("Subprocess pipe for "+str(self)+" is now connected", RNS.LOG_VERBOSE)
@ -137,7 +138,7 @@ class PipeInterface(Interface):
elif (byte == HDLC.FLAG):
in_frame = True
data_buffer = b""
elif (in_frame and len(data_buffer) < RNS.Reticulum.MTU):
elif (in_frame and len(data_buffer) < self.HW_MTU):
if (byte == HDLC.ESC):
escape = True
else:


@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@ -43,14 +43,23 @@ class KISS():
CMD_CR = 0x05
CMD_RADIO_STATE = 0x06
CMD_RADIO_LOCK = 0x07
CMD_ST_ALOCK = 0x0B
CMD_LT_ALOCK = 0x0C
CMD_DETECT = 0x08
CMD_LEAVE = 0x0A
CMD_READY = 0x0F
CMD_STAT_RX = 0x21
CMD_STAT_TX = 0x22
CMD_STAT_RSSI = 0x23
CMD_STAT_SNR = 0x24
CMD_STAT_CHTM = 0x25
CMD_STAT_PHYPRM = 0x26
CMD_BLINK = 0x30
CMD_RANDOM = 0x40
CMD_FB_EXT = 0x41
CMD_FB_READ = 0x42
CMD_FB_WRITE = 0x43
CMD_BT_CTRL = 0x46
CMD_PLATFORM = 0x48
CMD_MCU = 0x49
CMD_FW_VERSION = 0x50
@ -82,25 +91,26 @@ class KISS():
class RNodeInterface(Interface):
MAX_CHUNK = 32768
owner = None
port = None
speed = None
databits = None
parity = None
stopbits = None
serial = None
FREQ_MIN = 137000000
FREQ_MAX = 1020000000
FREQ_MAX = 3000000000
RSSI_OFFSET = 157
CALLSIGN_MAX_LEN = 32
REQUIRED_FW_VER_MAJ = 1
REQUIRED_FW_VER_MIN = 26
REQUIRED_FW_VER_MIN = 52
RECONNECT_WAIT = 5
Q_SNR_MIN_BASE = -9
Q_SNR_MAX = 6
Q_SNR_STEP = 2
def __init__(self, owner, name, port, frequency = None, bandwidth = None, txpower = None, sf = None, cr = None, flow_control = False, id_interval = None, id_callsign = None, st_alock = None, lt_alock = None):
if RNS.vendor.platformutils.is_android():
raise SystemError("Invalid interface type. The Android-specific RNode interface must be used on Android")
def __init__(self, owner, name, port, frequency = None, bandwidth = None, txpower = None, sf = None, cr = None, flow_control = False, id_interval = None, id_callsign = None):
import importlib
if importlib.util.find_spec('serial') != None:
import serial
@ -109,8 +119,9 @@ class RNodeInterface(Interface):
RNS.log("You can install one with the command: python3 -m pip install pyserial", RNS.LOG_CRITICAL)
RNS.panic()
self.rxb = 0
self.txb = 0
super().__init__()
self.HW_MTU = 508
self.pyserial = serial
self.serial = None
@ -119,10 +130,11 @@ class RNodeInterface(Interface):
self.port = port
self.speed = 115200
self.databits = 8
self.parity = serial.PARITY_NONE
self.stopbits = 1
self.timeout = 100
self.online = False
self.detached = False
self.reconnecting= False
self.frequency = frequency
self.bandwidth = bandwidth
@ -131,7 +143,10 @@ class RNodeInterface(Interface):
self.cr = cr
self.state = KISS.RADIO_STATE_OFF
self.bitrate = 0
self.st_alock = st_alock
self.lt_alock = lt_alock
self.platform = None
self.display = None
self.mcu = None
self.detected = False
self.firmware_ok = False
@ -140,6 +155,7 @@ class RNodeInterface(Interface):
self.last_id = 0
self.first_tx = None
self.reconnect_w = RNodeInterface.RECONNECT_WAIT
self.r_frequency = None
self.r_bandwidth = None
@ -151,26 +167,38 @@ class RNodeInterface(Interface):
self.r_stat_rx = None
self.r_stat_tx = None
self.r_stat_rssi = None
self.r_stat_snr = None
self.r_st_alock = None
self.r_lt_alock = None
self.r_random = None
self.r_airtime_short = 0.0
self.r_airtime_long = 0.0
self.r_channel_load_short = 0.0
self.r_channel_load_long = 0.0
self.r_symbol_time_ms = None
self.r_symbol_rate = None
self.r_preamble_symbols = None
self.r_premable_time_ms = None
self.packet_queue = []
self.flow_control = flow_control
self.interface_ready = False
self.announce_rate_target = None
self.validcfg = True
if (self.frequency < RNodeInterface.FREQ_MIN or self.frequency > RNodeInterface.FREQ_MAX):
RNS.log("Invalid frequency configured for "+str(self), RNS.LOG_ERROR)
self.validcfg = False
if (self.txpower < 0 or self.txpower > 17):
if (self.txpower < 0 or self.txpower > 22):
RNS.log("Invalid TX power configured for "+str(self), RNS.LOG_ERROR)
self.validcfg = False
if (self.bandwidth < 7800 or self.bandwidth > 500000):
if (self.bandwidth < 7800 or self.bandwidth > 1625000):
RNS.log("Invalid bandwidth configured for "+str(self), RNS.LOG_ERROR)
self.validcfg = False
if (self.sf < 7 or self.sf > 12):
if (self.sf < 5 or self.sf > 12):
RNS.log("Invalid spreading factor configured for "+str(self), RNS.LOG_ERROR)
self.validcfg = False
@ -178,6 +206,14 @@ class RNodeInterface(Interface):
RNS.log("Invalid coding rate configured for "+str(self), RNS.LOG_ERROR)
self.validcfg = False
if (self.st_alock and (self.st_alock < 0.0 or self.st_alock > 100.0)):
RNS.log("Invalid short-term airtime limit configured for "+str(self), RNS.LOG_ERROR)
self.validcfg = False
if (self.lt_alock and (self.lt_alock < 0.0 or self.lt_alock > 100.0)):
RNS.log("Invalid long-term airtime limit configured for "+str(self), RNS.LOG_ERROR)
self.validcfg = False
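For orientation, a hedged sketch of how the parameters validated above typically appear in a Reticulum configuration file. The key names follow the reference config, and the airtime-limit keys are assumed mappings for the new st_alock/lt_alock options, so verify them against the RNS version in use; the values are simply examples inside the ranges enforced above:
[[RNode LoRa Interface]]
  type = RNodeInterface
  interface_enabled = True
  port = /dev/ttyUSB0
  frequency = 867200000      # FREQ_MIN..FREQ_MAX
  bandwidth = 125000         # 7800..1625000
  txpower = 7                # 0..22
  spreadingfactor = 8        # 5..12
  codingrate = 5
  airtime_limit_short = 33   # percent, 0..100 (assumed key name)
  airtime_limit_long = 1.5   # percent, 0..100 (assumed key name)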
if id_interval != None and id_callsign != None:
if (len(id_callsign.encode("utf-8")) <= RNodeInterface.CALLSIGN_MAX_LEN):
self.should_id = True
@ -195,14 +231,21 @@ class RNodeInterface(Interface):
try:
self.open_port()
if self.serial.is_open:
self.configure_device()
else:
raise IOError("Could not open serial port")
except Exception as e:
RNS.log("Could not open serial port for interface "+str(self), RNS.LOG_ERROR)
raise e
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
RNS.log("Reticulum will attempt to bring up this interface periodically", RNS.LOG_ERROR)
if not self.detached and not self.reconnecting:
thread = threading.Thread(target=self.reconnect_port)
thread.daemon = True
thread.start()
if self.serial.is_open:
self.configure_device()
else:
raise IOError("Could not open serial port")
def open_port(self):
RNS.log("Opening serial port "+self.port+"...")
@ -210,7 +253,7 @@ class RNodeInterface(Interface):
port = self.port,
baudrate = self.speed,
bytesize = self.databits,
parity = self.parity,
parity = self.pyserial.PARITY_NONE,
stopbits = self.stopbits,
xonxoff = False,
rtscts = False,
@ -222,35 +265,42 @@ class RNodeInterface(Interface):
def configure_device(self):
self.r_frequency = None
self.r_bandwidth = None
self.r_txpower = None
self.r_sf = None
self.r_cr = None
self.r_state = None
self.r_lock = None
sleep(2.0)
thread = threading.Thread(target=self.readLoop)
thread.setDaemon(True)
thread.daemon = True
thread.start()
self.detect()
sleep(0.1)
sleep(0.2)
if not self.detected:
raise IOError("Could not detect device")
RNS.log("Could not detect device for "+str(self), RNS.LOG_ERROR)
self.serial.close()
else:
if self.platform == KISS.PLATFORM_ESP32:
RNS.log("Resetting ESP32-based device before configuration...", RNS.LOG_VERBOSE)
self.hard_reset()
self.display = True
self.online = True
RNS.log("Serial port "+self.port+" is now open")
RNS.log("Configuring RNode interface...", RNS.LOG_VERBOSE)
self.initRadio()
if (self.validateRadioState()):
self.interface_ready = True
RNS.log(str(self)+" is configured and powered up")
sleep(1.0)
sleep(0.3)
self.online = True
else:
RNS.log("After configuring "+str(self)+", the reported radio parameters did not match your configuration.", RNS.LOG_ERROR)
RNS.log("Make sure that your hardware actually supports the parameters specified in the configuration", RNS.LOG_ERROR)
RNS.log("Aborting RNode startup", RNS.LOG_ERROR)
self.serial.close()
raise IOError("RNode interface did not pass configuration validation")
def initRadio(self):
@ -259,13 +309,59 @@ class RNodeInterface(Interface):
self.setTXPower()
self.setSpreadingFactor()
self.setCodingRate()
self.setSTALock()
self.setLTALock()
self.setRadioState(KISS.RADIO_STATE_ON)
def detect(self):
kiss_command = bytes([KISS.FEND, KISS.CMD_DETECT, KISS.DETECT_REQ, KISS.FEND, KISS.CMD_FW_VERSION, 0x00, KISS.FEND, KISS.CMD_PLATFORM, 0x00, KISS.FEND, KISS.CMD_MCU, 0x00, KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("An IO error occurred while detecting hardware for "+self(str))
raise IOError("An IO error occurred while detecting hardware for "+str(self))
def leave(self):
kiss_command = bytes([KISS.FEND, KISS.CMD_LEAVE, 0xFF, KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("An IO error occurred while sending host left command to device")
def enable_external_framebuffer(self):
if self.display != None:
kiss_command = bytes([KISS.FEND, KISS.CMD_FB_EXT, 0x01, KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("An IO error occurred while enabling external framebuffer on device")
def disable_external_framebuffer(self):
if self.display != None:
kiss_command = bytes([KISS.FEND, KISS.CMD_FB_EXT, 0x00, KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("An IO error occurred while disabling external framebuffer on device")
FB_PIXEL_WIDTH = 64
FB_BITS_PER_PIXEL = 1
FB_PIXELS_PER_BYTE = 8//FB_BITS_PER_PIXEL
FB_BYTES_PER_LINE = FB_PIXEL_WIDTH//FB_PIXELS_PER_BYTE
def display_image(self, imagedata):
if self.display != None:
lines = len(imagedata)//8
for line in range(lines):
line_start = line*RNodeInterface.FB_BYTES_PER_LINE
line_end = line_start+RNodeInterface.FB_BYTES_PER_LINE
line_data = bytes(imagedata[line_start:line_end])
self.write_framebuffer(line, line_data)
def write_framebuffer(self, line, line_data):
if self.display != None:
line_byte = line.to_bytes(1, byteorder="big", signed=False)
data = line_byte+line_data
escaped_data = KISS.escape(data)
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_FB_WRITE])+escaped_data+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("An IO error occurred while writing framebuffer data device")
def hard_reset(self):
kiss_command = bytes([KISS.FEND, KISS.CMD_RESET, 0xf8, KISS.FEND])
@ -284,7 +380,7 @@ class RNodeInterface(Interface):
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_FREQUENCY])+data+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("An IO error occurred while configuring frequency for "+self(str))
raise IOError("An IO error occurred while configuring frequency for "+str(self))
def setBandwidth(self):
c1 = self.bandwidth >> 24
@ -296,35 +392,59 @@ class RNodeInterface(Interface):
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_BANDWIDTH])+data+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("An IO error occurred while configuring bandwidth for "+self(str))
raise IOError("An IO error occurred while configuring bandwidth for "+str(self))
def setTXPower(self):
txp = bytes([self.txpower])
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_TXPOWER])+txp+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("An IO error occurred while configuring TX power for "+self(str))
raise IOError("An IO error occurred while configuring TX power for "+str(self))
def setSpreadingFactor(self):
sf = bytes([self.sf])
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_SF])+sf+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("An IO error occurred while configuring spreading factor for "+self(str))
raise IOError("An IO error occurred while configuring spreading factor for "+str(self))
def setCodingRate(self):
cr = bytes([self.cr])
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_CR])+cr+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("An IO error occurred while configuring coding rate for "+self(str))
raise IOError("An IO error occurred while configuring coding rate for "+str(self))
def setSTALock(self):
if self.st_alock != None:
at = int(self.st_alock*100)
c1 = at >> 8 & 0xFF
c2 = at & 0xFF
data = KISS.escape(bytes([c1])+bytes([c2]))
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_ST_ALOCK])+data+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("An IO error occurred while configuring short-term airtime limit for "+str(self))
def setLTALock(self):
if self.lt_alock != None:
at = int(self.lt_alock*100)
c1 = at >> 8 & 0xFF
c2 = at & 0xFF
data = KISS.escape(bytes([c1])+bytes([c2]))
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_LT_ALOCK])+data+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("An IO error occurred while configuring long-term airtime limit for "+str(self))
def setRadioState(self, state):
self.state = state
kiss_command = bytes([KISS.FEND])+bytes([KISS.CMD_RADIO_STATE])+bytes([state])+bytes([KISS.FEND])
written = self.serial.write(kiss_command)
if written != len(kiss_command):
raise IOError("An IO error occurred while configuring radio state for "+self(str))
raise IOError("An IO error occurred while configuring radio state for "+str(self))
def validate_firmware(self):
if (self.maj_version >= RNodeInterface.REQUIRED_FW_VER_MAJ):
@ -336,14 +456,16 @@ class RNodeInterface(Interface):
RNS.log("The firmware version of the connected RNode is "+str(self.maj_version)+"."+str(self.min_version), RNS.LOG_ERROR)
RNS.log("This version of Reticulum requires at least version "+str(RNodeInterface.REQUIRED_FW_VER_MAJ)+"."+str(RNodeInterface.REQUIRED_FW_VER_MIN), RNS.LOG_ERROR)
RNS.log("Please update your RNode firmware with rnodeconf (https://github.com/markqvist/rnodeconfigutil/)")
RNS.log("Please update your RNode firmware with rnodeconf from https://github.com/markqvist/rnodeconfigutil/")
RNS.panic()
def validateRadioState(self):
RNS.log("Wating for radio configuration validation for "+str(self)+"...", RNS.LOG_VERBOSE)
RNS.log("Waiting for radio configuration validation for "+str(self)+"...", RNS.LOG_VERBOSE)
sleep(0.25);
if (self.frequency != self.r_frequency):
self.validcfg = True
if (self.r_frequency != None and abs(self.frequency - int(self.r_frequency)) > 100):
RNS.log("Frequency mismatch", RNS.LOG_ERROR)
self.validcfg = False
if (self.bandwidth != self.r_bandwidth):
@ -374,7 +496,7 @@ class RNodeInterface(Interface):
self.bitrate = 0
def processIncoming(self, data):
self.rxb += len(data)
self.rxb += len(data)
self.owner.inbound(data, self)
self.r_stat_rssi = None
self.r_stat_snr = None
@ -439,7 +561,7 @@ class RNodeInterface(Interface):
command = KISS.CMD_UNKNOWN
data_buffer = b""
command_buffer = b""
elif (in_frame and len(data_buffer) < RNS.Reticulum.MTU):
elif (in_frame and len(data_buffer) < self.HW_MTU):
if (len(data_buffer) == 0 and command == KISS.CMD_UNKNOWN):
command = byte
elif (command == KISS.CMD_DATA):
@ -554,6 +676,96 @@ class RNodeInterface(Interface):
self.r_stat_rssi = byte-RNodeInterface.RSSI_OFFSET
elif (command == KISS.CMD_STAT_SNR):
self.r_stat_snr = int.from_bytes(bytes([byte]), byteorder="big", signed=True) * 0.25
try:
sfs = self.r_sf-7
snr = self.r_stat_snr
q_snr_min = RNodeInterface.Q_SNR_MIN_BASE-sfs*RNodeInterface.Q_SNR_STEP
q_snr_max = RNodeInterface.Q_SNR_MAX
q_snr_span = q_snr_max-q_snr_min
quality = round(((snr-q_snr_min)/(q_snr_span))*100,1)
if quality > 100.0: quality = 100.0
if quality < 0.0: quality = 0.0
self.r_stat_q = quality
except:
pass
elif (command == KISS.CMD_ST_ALOCK):
if (byte == KISS.FESC):
escape = True
else:
if (escape):
if (byte == KISS.TFEND):
byte = KISS.FEND
if (byte == KISS.TFESC):
byte = KISS.FESC
escape = False
command_buffer = command_buffer+bytes([byte])
if (len(command_buffer) == 2):
at = command_buffer[0] << 8 | command_buffer[1]
self.r_st_alock = at/100.0
RNS.log(str(self)+" Radio reporting short-term airtime limit is "+str(self.r_st_alock)+"%", RNS.LOG_DEBUG)
elif (command == KISS.CMD_LT_ALOCK):
if (byte == KISS.FESC):
escape = True
else:
if (escape):
if (byte == KISS.TFEND):
byte = KISS.FEND
if (byte == KISS.TFESC):
byte = KISS.FESC
escape = False
command_buffer = command_buffer+bytes([byte])
if (len(command_buffer) == 2):
at = command_buffer[0] << 8 | command_buffer[1]
self.r_lt_alock = at/100.0
RNS.log(str(self)+" Radio reporting long-term airtime limit is "+str(self.r_lt_alock)+"%", RNS.LOG_DEBUG)
elif (command == KISS.CMD_STAT_CHTM):
if (byte == KISS.FESC):
escape = True
else:
if (escape):
if (byte == KISS.TFEND):
byte = KISS.FEND
if (byte == KISS.TFESC):
byte = KISS.FESC
escape = False
command_buffer = command_buffer+bytes([byte])
if (len(command_buffer) == 8):
ats = command_buffer[0] << 8 | command_buffer[1]
atl = command_buffer[2] << 8 | command_buffer[3]
cus = command_buffer[4] << 8 | command_buffer[5]
cul = command_buffer[6] << 8 | command_buffer[7]
self.r_airtime_short = ats/100.0
self.r_airtime_long = atl/100.0
self.r_channel_load_short = cus/100.0
self.r_channel_load_long = cul/100.0
elif (command == KISS.CMD_STAT_PHYPRM):
if (byte == KISS.FESC):
escape = True
else:
if (escape):
if (byte == KISS.TFEND):
byte = KISS.FEND
if (byte == KISS.TFESC):
byte = KISS.FESC
escape = False
command_buffer = command_buffer+bytes([byte])
if (len(command_buffer) == 10):
lst = (command_buffer[0] << 8 | command_buffer[1])/1000.0
lsr = command_buffer[2] << 8 | command_buffer[3]
prs = command_buffer[4] << 8 | command_buffer[5]
prt = command_buffer[6] << 8 | command_buffer[7]
cst = command_buffer[8] << 8 | command_buffer[9]
if lst != self.r_symbol_time_ms or lsr != self.r_symbol_rate or prs != self.r_preamble_symbols or prt != self.r_premable_time_ms or cst != self.r_csma_slot_time_ms:
self.r_symbol_time_ms = lst
self.r_symbol_rate = lsr
self.r_preamble_symbols = prs
self.r_premable_time_ms = prt
self.r_csma_slot_time_ms = cst
RNS.log(str(self)+" Radio reporting symbol time is "+str(round(self.r_symbol_time_ms,2))+"ms (at "+str(self.r_symbol_rate)+" baud)", RNS.LOG_DEBUG)
RNS.log(str(self)+" Radio reporting preamble is "+str(self.r_preamble_symbols)+" symbols ("+str(self.r_premable_time_ms)+"ms)", RNS.LOG_DEBUG)
RNS.log(str(self)+" Radio reporting CSMA slot time is "+str(self.r_csma_slot_time_ms)+"ms", RNS.LOG_DEBUG)
elif (command == KISS.CMD_RANDOM):
self.r_random = byte
elif (command == KISS.CMD_PLATFORM):
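The CMD_STAT_SNR handler above maps the reported SNR onto a 0-100 link quality, widening the usable SNR window by Q_SNR_STEP (2 dB) for every spreading-factor step above 7. A worked example with hypothetical radio readings:
# Worked example of the link-quality calculation in the CMD_STAT_SNR handler
# (radio readings hypothetical; constants as defined for RNodeInterface above):
Q_SNR_MIN_BASE, Q_SNR_MAX, Q_SNR_STEP = -9, 6, 2
r_sf = 8
r_stat_snr = -2.5

sfs = r_sf - 7                                   # 1
q_snr_min = Q_SNR_MIN_BASE - sfs*Q_SNR_STEP      # -11
q_snr_span = Q_SNR_MAX - q_snr_min               # 17
quality = round(((r_stat_snr - q_snr_min)/q_snr_span)*100, 1)
print(quality)                                   # 50.0 (clamped to 0..100 in the handler)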
@ -564,7 +776,7 @@ class RNodeInterface(Interface):
if (byte == KISS.ERROR_INITRADIO):
RNS.log(str(self)+" hardware initialisation error (code "+RNS.hexrep(byte)+")", RNS.LOG_ERROR)
raise IOError("Radio initialisation failure")
elif (byte == KISS.ERROR_INITRADIO):
elif (byte == KISS.ERROR_TXFAILED):
RNS.log(str(self)+" hardware TX error (code "+RNS.hexrep(byte)+")", RNS.LOG_ERROR)
raise IOError("Hardware transmit failure")
else:
@ -587,7 +799,7 @@ class RNodeInterface(Interface):
else:
time_since_last = int(time.time()*1000) - last_read_ms
if len(data_buffer) > 0 and time_since_last > self.timeout:
RNS.log(str(self)+" serial read timeout", RNS.LOG_DEBUG)
RNS.log(str(self)+" serial read timeout in command "+str(command), RNS.LOG_WARNING)
data_buffer = b""
in_frame = False
command = KISS.CMD_UNKNOWN
@ -612,13 +824,19 @@ class RNodeInterface(Interface):
RNS.log("Reticulum will attempt to reconnect the interface periodically.", RNS.LOG_ERROR)
self.online = False
self.serial.close()
self.reconnect_port()
try:
self.serial.close()
except Exception as e:
pass
if not self.detached and not self.reconnecting:
self.reconnect_port()
def reconnect_port(self):
while not self.online:
self.reconnecting = True
while not self.online and not self.detached:
try:
time.sleep(3.5)
time.sleep(5)
RNS.log("Attempting to reconnect serial port "+str(self.port)+" for "+str(self)+"...", RNS.LOG_VERBOSE)
self.open_port()
if self.serial.is_open:
@ -626,8 +844,18 @@ class RNodeInterface(Interface):
except Exception as e:
RNS.log("Error while reconnecting port, the contained exception was: "+str(e), RNS.LOG_ERROR)
RNS.log("Reconnected serial port for "+str(self))
self.reconnecting = False
if self.online:
RNS.log("Reconnected serial port for "+str(self))
def detach(self):
self.detached = True
self.disable_external_framebuffer()
self.setRadioState(KISS.RADIO_STATE_OFF)
self.leave()
def should_ingress_limit(self):
return False
def __str__(self):
return "RNodeInterface["+str(self.name)+"]"


@ -60,8 +60,9 @@ class SerialInterface(Interface):
RNS.log("You can install one with the command: python3 -m pip install pyserial", RNS.LOG_CRITICAL)
RNS.panic()
self.rxb = 0
self.txb = 0
super().__init__()
self.HW_MTU = 564
self.pyserial = serial
self.serial = None
@ -114,7 +115,7 @@ class SerialInterface(Interface):
def configure_device(self):
sleep(0.5)
thread = threading.Thread(target=self.readLoop)
thread.setDaemon(True)
thread.daemon = True
thread.start()
self.online = True
RNS.log("Serial port "+self.port+" is now open", RNS.LOG_VERBOSE)
@ -152,7 +153,7 @@ class SerialInterface(Interface):
elif (byte == HDLC.FLAG):
in_frame = True
data_buffer = b""
elif (in_frame and len(data_buffer) < RNS.Reticulum.MTU):
elif (in_frame and len(data_buffer) < self.HW_MTU):
if (byte == HDLC.ESC):
escape = True
else:
@ -199,5 +200,8 @@ class SerialInterface(Interface):
RNS.log("Reconnected serial port for "+str(self))
def should_ingress_limit(self):
return False
def __str__(self):
return "SerialInterface["+self.name+"]"


@ -65,19 +65,23 @@ class TCPClientInterface(Interface):
RECONNECT_MAX_TRIES = None
# TCP socket options
TCP_USER_TIMEOUT = 20
TCP_USER_TIMEOUT = 24
TCP_PROBE_AFTER = 5
TCP_PROBE_INTERVAL = 3
TCP_PROBES = 5
TCP_PROBE_INTERVAL = 2
TCP_PROBES = 12
I2P_USER_TIMEOUT = 40
INITIAL_CONNECT_TIMEOUT = 5
SYNCHRONOUS_START = True
I2P_USER_TIMEOUT = 45
I2P_PROBE_AFTER = 10
I2P_PROBE_INTERVAL = 5
I2P_PROBES = 6
I2P_PROBE_INTERVAL = 9
I2P_PROBES = 5
def __init__(self, owner, name, target_ip=None, target_port=None, connected_socket=None, max_reconnect_tries=None, kiss_framing=False, i2p_tunneled = False):
self.rxb = 0
self.txb = 0
def __init__(self, owner, name, target_ip=None, target_port=None, connected_socket=None, max_reconnect_tries=None, kiss_framing=False, i2p_tunneled = False, connect_timeout = None):
super().__init__()
self.HW_MTU = 1064
self.IN = True
self.OUT = False
@ -112,23 +116,37 @@ class TCPClientInterface(Interface):
elif platform.system() == "Darwin":
self.set_timeouts_osx()
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
elif target_ip != None and target_port != None:
self.receives = True
self.target_ip = target_ip
self.target_port = target_port
self.initiator = True
if not self.connect(initial=True):
thread = threading.Thread(target=self.reconnect)
thread.setDaemon(True)
thread.start()
else:
thread = threading.Thread(target=self.read_loop)
thread.setDaemon(True)
thread.start()
if not self.kiss_framing:
self.wants_tunnel = True
if connect_timeout != None:
self.connect_timeout = connect_timeout
else:
self.connect_timeout = TCPClientInterface.INITIAL_CONNECT_TIMEOUT
if TCPClientInterface.SYNCHRONOUS_START:
self.initial_connect()
else:
thread = threading.Thread(target=self.initial_connect)
thread.daemon = True
thread.start()
def initial_connect(self):
if not self.connect(initial=True):
thread = threading.Thread(target=self.reconnect)
thread.daemon = True
thread.start()
else:
thread = threading.Thread(target=self.read_loop)
thread.daemon = True
thread.start()
if not self.kiss_framing:
self.wants_tunnel = True
def set_timeouts_linux(self):
if not self.i2p_tunneled:
@ -137,6 +155,7 @@ class TCPClientInterface(Interface):
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, int(TCPClientInterface.TCP_PROBE_AFTER))
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, int(TCPClientInterface.TCP_PROBE_INTERVAL))
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, int(TCPClientInterface.TCP_PROBES))
else:
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, int(TCPClientInterface.I2P_USER_TIMEOUT * 1000))
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
@ -178,9 +197,18 @@ class TCPClientInterface(Interface):
def connect(self, initial=False):
try:
if initial:
RNS.log("Establishing TCP connection for "+str(self)+"...", RNS.LOG_DEBUG)
self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.socket.settimeout(TCPClientInterface.INITIAL_CONNECT_TIMEOUT)
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
self.socket.connect((self.target_ip, self.target_port))
self.socket.settimeout(None)
self.online = True
if initial:
RNS.log("TCP connection for "+str(self)+" established", RNS.LOG_DEBUG)
except Exception as e:
if initial:
@ -224,11 +252,11 @@ class TCPClientInterface(Interface):
RNS.log("Connection attempt for "+str(self)+" failed: "+str(e), RNS.LOG_DEBUG)
if not self.never_connected:
RNS.log("Reconnected TCP socket for "+str(self)+".", RNS.LOG_INFO)
RNS.log("Reconnected socket for "+str(self)+".", RNS.LOG_INFO)
self.reconnecting = False
thread = threading.Thread(target=self.read_loop)
thread.setDaemon(True)
thread.daemon = True
thread.start()
if not self.kiss_framing:
RNS.Transport.synthesize_tunnel(self)
@ -246,8 +274,8 @@ class TCPClientInterface(Interface):
def processOutgoing(self, data):
if self.online:
while self.writing:
time.sleep(0.01)
# while self.writing:
# time.sleep(0.01)
try:
self.writing = True
@ -293,7 +321,7 @@ class TCPClientInterface(Interface):
in_frame = True
command = KISS.CMD_UNKNOWN
data_buffer = b""
elif (in_frame and len(data_buffer) < RNS.Reticulum.MTU):
elif (in_frame and len(data_buffer) < self.HW_MTU):
if (len(data_buffer) == 0 and command == KISS.CMD_UNKNOWN):
# We only support one HDLC port for now, so
# strip off the port nibble
@ -319,7 +347,7 @@ class TCPClientInterface(Interface):
elif (byte == HDLC.FLAG):
in_frame = True
data_buffer = b""
elif (in_frame and len(data_buffer) < RNS.Reticulum.MTU):
elif (in_frame and len(data_buffer) < self.HW_MTU):
if (byte == HDLC.ESC):
escape = True
else:
@ -333,10 +361,10 @@ class TCPClientInterface(Interface):
else:
self.online = False
if self.initiator and not self.detached:
RNS.log("TCP socket for "+str(self)+" was closed, attempting to reconnect...", RNS.LOG_WARNING)
RNS.log("The socket for "+str(self)+" was closed, attempting to reconnect...", RNS.LOG_WARNING)
self.reconnect()
else:
RNS.log("TCP socket for remote client "+str(self)+" was closed.", RNS.LOG_VERBOSE)
RNS.log("The socket for remote client "+str(self)+" was closed.", RNS.LOG_VERBOSE)
self.teardown()
break
@ -382,35 +410,28 @@ class TCPServerInterface(Interface):
@staticmethod
def get_address_for_if(name):
import importlib
if importlib.util.find_spec('netifaces') != None:
import netifaces
return netifaces.ifaddresses(name)[netifaces.AF_INET][0]['addr']
else:
RNS.log("Getting interface addresses from device names requires the netifaces module.", RNS.LOG_CRITICAL)
RNS.log("You can install it with the command: python3 -m pip install netifaces", RNS.LOG_CRITICAL)
RNS.panic()
import RNS.vendor.ifaddr.niwrapper as netinfo
ifaddr = netinfo.ifaddresses(name)
return ifaddr[netinfo.AF_INET][0]["addr"]
@staticmethod
def get_broadcast_for_if(name):
import importlib
if importlib.util.find_spec('netifaces') != None:
import netifaces
return netifaces.ifaddresses(name)[netifaces.AF_INET][0]['broadcast']
else:
RNS.log("Getting interface addresses from device names requires the netifaces module.", RNS.LOG_CRITICAL)
RNS.log("You can install it with the command: python3 -m pip install netifaces", RNS.LOG_CRITICAL)
RNS.panic()
import RNS.vendor.ifaddr.niwrapper as netinfo
ifaddr = netinfo.ifaddresses(name)
return ifaddr[netinfo.AF_INET][0]["broadcast"]
def __init__(self, owner, name, device=None, bindip=None, bindport=None, i2p_tunneled=False):
self.rxb = 0
self.txb = 0
super().__init__()
self.HW_MTU = 1064
self.online = False
self.clients = 0
self.IN = True
self.OUT = False
self.name = name
self.detached = False
self.i2p_tunneled = i2p_tunneled
self.mode = RNS.Interfaces.Interface.Interface.MODE_FULL
@ -437,7 +458,7 @@ class TCPServerInterface(Interface):
self.bitrate = TCPServerInterface.BITRATE_GUESS
thread = threading.Thread(target=self.server.serve_forever)
thread.setDaemon(True)
thread.daemon = True
thread.start()
self.online = True
@ -453,24 +474,66 @@ class TCPServerInterface(Interface):
spawned_interface.target_port = str(handler.client_address[1])
spawned_interface.parent_interface = self
spawned_interface.bitrate = self.bitrate
spawned_interface.ifac_size = self.ifac_size
spawned_interface.ifac_netname = self.ifac_netname
spawned_interface.ifac_netkey = self.ifac_netkey
if spawned_interface.ifac_netname != None or spawned_interface.ifac_netkey != None:
ifac_origin = b""
if spawned_interface.ifac_netname != None:
ifac_origin += RNS.Identity.full_hash(spawned_interface.ifac_netname.encode("utf-8"))
if spawned_interface.ifac_netkey != None:
ifac_origin += RNS.Identity.full_hash(spawned_interface.ifac_netkey.encode("utf-8"))
ifac_origin_hash = RNS.Identity.full_hash(ifac_origin)
spawned_interface.ifac_key = RNS.Cryptography.hkdf(
length=64,
derive_from=ifac_origin_hash,
salt=RNS.Reticulum.IFAC_SALT,
context=None
)
spawned_interface.ifac_identity = RNS.Identity.from_bytes(spawned_interface.ifac_key)
spawned_interface.ifac_signature = spawned_interface.ifac_identity.sign(RNS.Identity.full_hash(spawned_interface.ifac_key))
spawned_interface.announce_rate_target = self.announce_rate_target
spawned_interface.announce_rate_grace = self.announce_rate_grace
spawned_interface.announce_rate_penalty = self.announce_rate_penalty
spawned_interface.mode = self.mode
spawned_interface.HW_MTU = self.HW_MTU
spawned_interface.online = True
RNS.log("Spawned new TCPClient Interface: "+str(spawned_interface), RNS.LOG_VERBOSE)
RNS.Transport.interfaces.append(spawned_interface)
self.clients += 1
spawned_interface.read_loop()
def received_announce(self, from_spawned=False):
if from_spawned: self.ia_freq_deque.append(time.time())
def sent_announce(self, from_spawned=False):
if from_spawned: self.oa_freq_deque.append(time.time())
def processOutgoing(self, data):
pass
def detach(self):
if self.server != None:
if hasattr(self.server, "shutdown"):
if callable(self.server.shutdown):
try:
RNS.log("Detaching "+str(self), RNS.LOG_DEBUG)
self.server.shutdown()
self.detached = True
self.server = None
except Exception as e:
RNS.log("Error while shutting down server for "+str(self)+": "+str(e))
def __str__(self):
return "TCPServerInterface["+self.name+"/"+self.bind_ip+":"+str(self.bind_port)+"]"
class TCPInterfaceHandler(socketserver.BaseRequestHandler):
def __init__(self, callback, *args, **keys):
self.callback = callback


@ -34,29 +34,21 @@ class UDPInterface(Interface):
@staticmethod
def get_address_for_if(name):
import importlib
if importlib.util.find_spec('netifaces') != None:
import netifaces
return netifaces.ifaddresses(name)[netifaces.AF_INET][0]['addr']
else:
RNS.log("Getting interface addresses from device names requires the netifaces module.", RNS.LOG_CRITICAL)
RNS.log("You can install it with the command: python3 -m pip install netifaces", RNS.LOG_CRITICAL)
RNS.panic()
import RNS.vendor.ifaddr.niwrapper as netinfo
ifaddr = netinfo.ifaddresses(name)
return ifaddr[netinfo.AF_INET][0]["addr"]
@staticmethod
def get_broadcast_for_if(name):
import importlib
if importlib.util.find_spec('netifaces') != None:
import netifaces
return netifaces.ifaddresses(name)[netifaces.AF_INET][0]['broadcast']
else:
RNS.log("Getting interface addresses from device names requires the netifaces module.", RNS.LOG_CRITICAL)
RNS.log("You can install it with the command: python3 -m pip install netifaces", RNS.LOG_CRITICAL)
RNS.panic()
import RNS.vendor.ifaddr.niwrapper as netinfo
ifaddr = netinfo.ifaddresses(name)
return ifaddr[netinfo.AF_INET][0]["broadcast"]
def __init__(self, owner, name, device=None, bindip=None, bindport=None, forwardip=None, forwardport=None):
self.rxb = 0
self.txb = 0
super().__init__()
self.HW_MTU = 1064
self.IN = True
self.OUT = False
self.name = name
@ -86,7 +78,7 @@ class UDPInterface(Interface):
self.server = socketserver.UDPServer(address, handlerFactory(self.processIncoming))
thread = threading.Thread(target=self.server.serve_forever)
thread.setDaemon(True)
thread.daemon = True
thread.start()
self.online = True


@ -22,6 +22,7 @@
import os
import glob
import RNS.Interfaces.Android
modules = glob.glob(os.path.dirname(__file__)+"/*.py")
__all__ = [ os.path.basename(f)[:-3] for f in modules if not f.endswith('__init__.py')]
__all__ = [ os.path.basename(f)[:-3] for f in modules if not f.endswith('__init__.py')]

File diff suppressed because it is too large


@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@ -29,15 +29,19 @@ import RNS
class Packet:
"""
The Packet class is used to create packet instances that can be sent
over a Reticulum network. Packets to will automatically be encrypted if
they are adressed to a ``RNS.Destination.SINGLE`` destination,
over a Reticulum network. Packets will automatically be encrypted if
they are addressed to a ``RNS.Destination.SINGLE`` destination,
``RNS.Destination.GROUP`` destination or a :ref:`RNS.Link<api-link>`.
For ``RNS.Destination.GROUP`` destinations, Reticulum will use the
pre-shared key configured for the destination.
pre-shared key configured for the destination. All packets to group
destinations are encrypted with the same AES-128 key.
For ``RNS.Destination.SINGLE`` destinations and :ref:`RNS.Link<api-link>`
destinations, reticulum will use ephemeral keys, and offers **Forward Secrecy**.
For ``RNS.Destination.SINGLE`` destinations, Reticulum will use a newly
derived ephemeral AES-128 key for every packet.
For :ref:`RNS.Link<api-link>` destinations, Reticulum will use per-link
ephemeral keys, and offers **Forward Secrecy**.
:param destination: A :ref:`RNS.Destination<api-destination>` instance to which the packet will be sent.
:param data: The data payload to be included in the packet as *bytes*.
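A minimal usage sketch of the behaviour the docstring describes; creation of the Reticulum instance and of the destination is assumed to have happened elsewhere:
# Minimal sketch (RNS.Reticulum() and 'destination' are assumed to exist already):
import RNS

packet = RNS.Packet(destination, "hello".encode("utf-8"))
receipt = packet.send()   # an RNS.PacketReceipt if create_receipt was True, else None/False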
@ -54,9 +58,7 @@ class Packet:
# Header types
HEADER_1 = 0x00 # Normal header format
HEADER_2 = 0x01 # Header format used for packets in transport
HEADER_3 = 0x02 # Reserved
HEADER_4 = 0x03 # Reserved
header_types = [HEADER_1, HEADER_2, HEADER_3, HEADER_4]
header_types = [HEADER_1, HEADER_2]
# Packet context types
NONE = 0x00 # Generic data packet
@ -73,6 +75,7 @@ class Packet:
PATH_RESPONSE = 0x0B # Packet is a response to a path request
COMMAND = 0x0C # Packet is a command
COMMAND_STATUS = 0x0D # Packet is a status of an executed command
CHANNEL = 0x0E # Packet contains link channel data
KEEPALIVE = 0xFA # Packet is a keepalive packet
LINKIDENTIFY = 0xFB # Packet is a link peer identification proof
LINKCLOSE = 0xFC # Packet is a link close message
@ -135,10 +138,11 @@ class Packet:
self.receiving_interface = None
self.rssi = None
self.snr = None
self.q = None
def get_packed_flags(self):
if self.context == Packet.LRPROOF:
packed_flags = (self.header_type << 6) | (self.transport_type << 4) | RNS.Destination.LINK | self.packet_type
packed_flags = (self.header_type << 6) | (self.transport_type << 4) | (RNS.Destination.LINK << 2) | self.packet_type
else:
packed_flags = (self.header_type << 6) | (self.transport_type << 4) | (self.destination.type << 2) | self.packet_type
return packed_flags
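The flag byte above packs four two-bit fields into a single byte. A worked example with hypothetical field values:
# Worked example of the flag packing above (field values hypothetical):
header_type      = 0x01   # HEADER_2
transport_type   = 0x01
destination_type = 0x00
packet_type      = 0x00

packed_flags = (header_type << 6) | (transport_type << 4) | (destination_type << 2) | packet_type
# == 0b01010000 == 0x50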
@ -169,8 +173,8 @@ class Packet:
# Packet proofs over links are not encrypted
self.ciphertext = self.data
elif self.context == Packet.RESOURCE:
# A resource takes care of symmetric
# encryption by itself
# A resource takes care of encryption
# by itself
self.ciphertext = self.data
elif self.context == Packet.KEEPALIVE:
# Keepalive packets contain no actual
@ -211,21 +215,23 @@ class Packet:
self.flags = self.raw[0]
self.hops = self.raw[1]
self.header_type = (self.flags & 0b11000000) >> 6
self.header_type = (self.flags & 0b01000000) >> 6
self.transport_type = (self.flags & 0b00110000) >> 4
self.destination_type = (self.flags & 0b00001100) >> 2
self.packet_type = (self.flags & 0b00000011)
DST_LEN = RNS.Reticulum.TRUNCATED_HASHLENGTH//8
if self.header_type == Packet.HEADER_2:
self.transport_id = self.raw[2:12]
self.destination_hash = self.raw[12:22]
self.context = ord(self.raw[22:23])
self.data = self.raw[23:]
self.transport_id = self.raw[2:DST_LEN+2]
self.destination_hash = self.raw[DST_LEN+2:2*DST_LEN+2]
self.context = ord(self.raw[2*DST_LEN+2:2*DST_LEN+3])
self.data = self.raw[2*DST_LEN+3:]
else:
self.transport_id = None
self.destination_hash = self.raw[2:12]
self.context = ord(self.raw[12:13])
self.data = self.raw[13:]
self.destination_hash = self.raw[2:DST_LEN+2]
self.context = ord(self.raw[DST_LEN+2:DST_LEN+3])
self.data = self.raw[DST_LEN+3:]
self.packed = False
self.update_hash()
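With DST_LEN = RNS.Reticulum.TRUNCATED_HASHLENGTH//8, the unpacking above implies the following raw layouts; the 16-byte DST_LEN used for the byte offsets is illustrative:
# Raw packet layout implied by the unpacking above (DST_LEN of 16 bytes assumed):
#
# HEADER_1: [flags:1][hops:1][destination_hash:DST_LEN][context:1][data...]
#           -> data starts at byte DST_LEN+3 = 19
# HEADER_2: [flags:1][hops:1][transport_id:DST_LEN][destination_hash:DST_LEN][context:1][data...]
#           -> data starts at byte 2*DST_LEN+3 = 35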
@ -252,7 +258,7 @@ class Packet:
if not self.packed:
self.pack()
if RNS.Transport.outbound(self):
return self.receipt
else:
@ -271,6 +277,10 @@ class Packet:
:returns: A :ref:`RNS.PacketReceipt<api-packetreceipt>` instance if *create_receipt* was set to *True* when the packet was instantiated, if not returns *None*. If the packet could not be sent *False* is returned.
"""
if self.sent:
# Re-pack the packet to obtain new ciphertext for
# encrypted destinations
self.pack()
if RNS.Transport.outbound(self):
return self.receipt
else:
@ -313,7 +323,7 @@ class Packet:
def get_hashable_part(self):
hashable_part = bytes([self.raw[0] & 0b00001111])
if self.header_type == Packet.HEADER_2:
hashable_part += self.raw[12:]
hashable_part += self.raw[(RNS.Identity.TRUNCATED_HASHLENGTH//8)+2:]
else:
hashable_part += self.raw[2:]
@ -321,7 +331,7 @@ class Packet:
class ProofDestination:
def __init__(self, packet):
self.hash = packet.get_hash()[:10];
self.hash = packet.get_hash()[:RNS.Reticulum.TRUNCATED_HASHLENGTH//8];
self.type = RNS.Destination.SINGLE
def encrypt(self, plaintext):
@ -361,8 +371,8 @@ class PacketReceipt:
if packet.destination.type == RNS.Destination.LINK:
self.timeout = packet.destination.rtt * packet.destination.traffic_timeout_factor
else:
self.timeout = Packet.TIMEOUT_PER_HOP * RNS.Transport.hops_to(self.destination.hash)
self.timeout = RNS.Reticulum.get_instance().get_first_hop_timeout(self.destination.hash)
self.timeout += Packet.TIMEOUT_PER_HOP * RNS.Transport.hops_to(self.destination.hash)
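The change above seeds the receipt timeout with the instance's configured first-hop timeout instead of relying on the per-hop term alone. A worked example with assumed numbers (the real values come from the running instance and from Packet.TIMEOUT_PER_HOP):

# All values below are assumed for illustration only
first_hop_timeout = 6   # assumed result of get_first_hop_timeout(destination_hash)
TIMEOUT_PER_HOP = 6     # assumed per-hop constant for the example
hops = 4                # assumed result of RNS.Transport.hops_to(destination_hash)
timeout = first_hop_timeout + TIMEOUT_PER_HOP * hops   # = 30 seconds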
def get_status(self):
"""
@ -391,9 +401,15 @@ class PacketReceipt:
self.proved = True
self.concluded_at = time.time()
self.proof_packet = proof_packet
link.last_proof = self.concluded_at
if self.callbacks.delivery != None:
self.callbacks.delivery(self)
try:
self.callbacks.delivery(self)
except Exception as e:
RNS.log("An error occurred while evaluating external delivery callback for "+str(link), RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
return True
else:
return False
@ -488,7 +504,7 @@ class PacketReceipt:
if self.callbacks.timeout:
thread = threading.Thread(target=self.callbacks.timeout, args=(self,))
thread.setDaemon(True)
thread.daemon = True
thread.start()

RNS/Resolver.py (new file, +27 lines)


@ -0,0 +1,27 @@
# MIT License
#
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
class Resolver:
@staticmethod
def resolve_identity(full_name):
pass

RNS/Resource.py

@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@ -25,7 +25,9 @@ import os
import bz2
import math
import time
import tempfile
import threading
from threading import Lock
from .vendor import umsgpack as umsgpack
from time import sleep
@ -42,10 +44,39 @@ class Resource:
:param callback: An optional *callable* with the signature *callback(resource)*. Will be called when the resource transfer concludes.
:param progress_callback: An optional *callable* with the signature *callback(resource)*. Will be called whenever the resource transfer progress is updated.
"""
WINDOW_FLEXIBILITY = 4
WINDOW_MIN = 1
WINDOW_MAX = 10
# The initial window size at beginning of transfer
WINDOW = 4
# Absolute minimum window size during transfer
WINDOW_MIN = 2
# The maximum window size for transfers on slow links
WINDOW_MAX_SLOW = 10
# The maximum window size for transfers on fast links
WINDOW_MAX_FAST = 75
# For calculating maps and guard segments, this
# must be set to the global maximum window.
WINDOW_MAX = WINDOW_MAX_FAST
# If the fast rate is sustained for this many request
# rounds, the fast link window size will be allowed.
FAST_RATE_THRESHOLD = WINDOW_MAX_SLOW - WINDOW - 2
# If the RTT rate is higher than this value,
# the max window size for fast links will be used.
# The default is 50 Kbps (the value is stored in
# bytes per second, hence the "/ 8").
RATE_FAST = (50*1000) / 8
# The minimum allowed flexibility of the window size.
# The difference between window_max and window_min
# will never be smaller than this value.
WINDOW_FLEXIBILITY = 4
# Number of bytes in a map hash
MAPHASH_LEN = 4
SDU = RNS.Packet.MDU
RANDOM_HASH_SIZE = 4
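For clarity, the arithmetic implied by the constants above (illustration only, values copied from the definitions just shown):

WINDOW, WINDOW_MAX_SLOW, WINDOW_MAX_FAST = 4, 10, 75
FAST_RATE_THRESHOLD = WINDOW_MAX_SLOW - WINDOW - 2   # = 4 request rounds
RATE_FAST = (50 * 1000) / 8                          # = 6250 bytes/s, i.e. 50 Kbps
# A receiving resource must sustain better than RATE_FAST for FAST_RATE_THRESHOLD
# consecutive request rounds before window_max is raised from
# WINDOW_MAX_SLOW (10) to WINDOW_MAX_FAST (75).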
@ -74,9 +105,12 @@ class Resource:
PART_TIMEOUT_FACTOR = 4
PART_TIMEOUT_FACTOR_AFTER_RTT = 2
MAX_RETRIES = 5
SENDER_GRACE_TIME = 10
MAX_RETRIES = 16
MAX_ADV_RETRIES = 4
SENDER_GRACE_TIME = 10.0
PROCESSING_GRACE = 1.0
RETRY_GRACE_TIME = 0.25
PER_RETRY_DELAY = 0.5
WATCHDOG_MAX_SLEEP = 1
@ -118,9 +152,9 @@ class Resource:
resource.total_parts = int(math.ceil(resource.size/float(Resource.SDU)))
resource.received_count = 0
resource.outstanding_parts = 0
resource.parts = [None] * resource.total_parts
resource.parts = [None] * resource.total_parts
resource.window = Resource.WINDOW
resource.window_max = Resource.WINDOW_MAX
resource.window_max = Resource.WINDOW_MAX_SLOW
resource.window_min = Resource.WINDOW_MIN
resource.window_flexibility = Resource.WINDOW_FLEXIBILITY
resource.last_activity = time.time()
@ -136,25 +170,29 @@ class Resource:
resource.hashmap = [None] * resource.total_parts
resource.hashmap_height = 0
resource.waiting_for_hmu = False
resource.receiving_part = False
resource.consecutive_completed_height = 0
resource.consecutive_completed_height = -1
resource.link.register_incoming_resource(resource)
if not resource.link.has_incoming_resource(resource):
resource.link.register_incoming_resource(resource)
RNS.log("Accepting resource advertisement for "+RNS.prettyhexrep(resource.hash), RNS.LOG_DEBUG)
if resource.link.callbacks.resource_started != None:
try:
resource.link.callbacks.resource_started(resource)
except Exception as e:
RNS.log("Error while executing resource started callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
RNS.log(f"Accepting resource advertisement for {RNS.prettyhexrep(resource.hash)}. Transfer size is {RNS.prettysize(resource.size)} in {resource.total_parts} parts.", RNS.LOG_DEBUG)
if resource.link.callbacks.resource_started != None:
try:
resource.link.callbacks.resource_started(resource)
except Exception as e:
RNS.log("Error while executing resource started callback from "+str(resource)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
resource.hashmap_update(0, resource.hashmap_raw)
resource.hashmap_update(0, resource.hashmap_raw)
resource.watchdog_job()
resource.watchdog_job()
return resource
else:
RNS.log("Ignoring resource advertisement for "+RNS.prettyhexrep(resource.hash)+", resource already transferring", RNS.LOG_DEBUG)
return None
return resource
except Exception as e:
RNS.log("Could not decode resource advertisement, dropping resource", RNS.LOG_DEBUG)
return None
@ -167,9 +205,19 @@ class Resource:
resource_data = None
self.assembly_lock = False
if data != None:
if not hasattr(data, "read") and len(data) > Resource.MAX_EFFICIENT_SIZE:
original_data = data
data_size = len(original_data)
data = tempfile.TemporaryFile()
data.write(original_data)
del original_data
if hasattr(data, "read"):
data_size = os.stat(data.name).st_size
self.total_size = data_size
if data_size == None:
data_size = os.stat(data.name).st_size
self.total_size = data_size
self.grand_total_parts = math.ceil(data_size/Resource.SDU)
if data_size <= Resource.MAX_EFFICIENT_SIZE:
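As the constructor changes above show, the data argument may be either raw bytes or an open file-like object backed by a real file, and byte payloads above MAX_EFFICIENT_SIZE are now spooled to a temporary file before segmentation. A minimal usage sketch (not part of the diff; assumes an already established RNS.Link named link and a hypothetical local file):

# Sketch only; both input forms are accepted by the constructor above
small_resource = RNS.Resource(b"hello over the link", link)

file_handle = open("large_file.bin", "rb")   # hypothetical local file
big_resource = RNS.Resource(
    file_handle, link,
    callback=lambda resource: print("concluded with status", resource.status),
    progress_callback=lambda resource: print(round(resource.get_progress()*100, 1), "%"))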
@ -210,6 +258,7 @@ class Resource:
self.status = Resource.NONE
self.link = link
self.max_retries = Resource.MAX_RETRIES
self.max_adv_retries = Resource.MAX_ADV_RETRIES
self.retries_left = self.max_retries
self.timeout_factor = self.link.traffic_timeout_factor
self.part_timeout_factor = Resource.PART_TIMEOUT_FACTOR
@ -219,6 +268,11 @@ class Resource:
self.__watchdog_job_id = 0
self.__progress_callback = progress_callback
self.rtt = None
self.rtt_rxd_bytes = 0
self.req_sent = 0
self.req_resp_rtt_rate = 0
self.rtt_rxd_bytes_at_part_req = 0
self.fast_rate_rounds = 0
self.request_id = request_id
self.is_response = is_response
@ -236,7 +290,7 @@ class Resource:
self.uncompressed_data = data
compression_began = time.time()
if (auto_compress and len(self.uncompressed_data) < Resource.AUTO_COMPRESS_MAX_SIZE):
if (auto_compress and len(self.uncompressed_data) <= Resource.AUTO_COMPRESS_MAX_SIZE):
RNS.log("Compressing resource data...", RNS.LOG_DEBUG)
self.compressed_data = bz2.compress(self.uncompressed_data)
RNS.log("Compression completed in "+str(round(time.time()-compression_began, 3))+" seconds", RNS.LOG_DEBUG)
@ -323,7 +377,8 @@ class Resource:
if advertise:
self.advertise()
else:
pass
self.receive_lock = Lock()
def hashmap_update_packet(self, plaintext):
if not self.status == Resource.FAILED:
@ -356,12 +411,11 @@ class Resource:
the resource advertisement it will begin transferring.
"""
thread = threading.Thread(target=self.__advertise_job)
thread.setDaemon(True)
thread.daemon = True
thread.start()
def __advertise_job(self):
data = ResourceAdvertisement(self).pack()
self.advertisement_packet = RNS.Packet(self.link, data, context=RNS.Packet.RESOURCE_ADV)
self.advertisement_packet = RNS.Packet(self.link, ResourceAdvertisement(self).pack(), context=RNS.Packet.RESOURCE_ADV)
while not self.link.ready_for_new_resource():
self.status = Resource.QUEUED
sleep(0.25)
@ -372,6 +426,7 @@ class Resource:
self.adv_sent = self.last_activity
self.rtt = None
self.status = Resource.ADVERTISED
self.retries_left = self.max_adv_retries
self.link.register_outgoing_resource(self)
RNS.log("Sent resource advertisement for "+RNS.prettyhexrep(self.hash), RNS.LOG_DEBUG)
except Exception as e:
@ -383,7 +438,7 @@ class Resource:
def watchdog_job(self):
thread = threading.Thread(target=self.__watchdog_job)
thread.setDaemon(True)
thread.daemon = True
thread.start()
def __watchdog_job(self):
@ -397,7 +452,7 @@ class Resource:
sleep_time = None
if self.status == Resource.ADVERTISED:
sleep_time = (self.adv_sent+self.timeout)-time.time()
sleep_time = (self.adv_sent+self.timeout+Resource.PROCESSING_GRACE)-time.time()
if sleep_time < 0:
if self.retries_left <= 0:
RNS.log("Resource transfer timeout after sending advertisement", RNS.LOG_DEBUG)
@ -407,12 +462,13 @@ class Resource:
try:
RNS.log("No part requests received, retrying resource advertisement...", RNS.LOG_DEBUG)
self.retries_left -= 1
self.advertisement_packet.resend()
self.advertisement_packet = RNS.Packet(self.link, ResourceAdvertisement(self).pack(), context=RNS.Packet.RESOURCE_ADV)
self.advertisement_packet.send()
self.last_activity = time.time()
self.adv_sent = self.last_activity
sleep_time = 0.001
except Exception as e:
RNS.log("Could not resend advertisement packet, cancelling resource", RNS.LOG_VERBOSE)
RNS.log("Could not resend advertisement packet, cancelling resource. The contained exception was: "+str(e), RNS.LOG_VERBOSE)
self.cancel()
@ -426,11 +482,14 @@ class Resource:
window_remaining = self.outstanding_parts
sleep_time = self.last_activity + (rtt*(self.part_timeout_factor+window_remaining)) + Resource.RETRY_GRACE_TIME - time.time()
retries_used = self.max_retries - self.retries_left
extra_wait = retries_used * Resource.PER_RETRY_DELAY
sleep_time = self.last_activity + (rtt*(self.part_timeout_factor+window_remaining)) + Resource.RETRY_GRACE_TIME + extra_wait - time.time()
if sleep_time < 0:
if self.retries_left > 0:
RNS.log("Timed out waiting for parts, requesting retry", RNS.LOG_DEBUG)
ms = "" if self.outstanding_parts == 1 else "s"
RNS.log("Timed out waiting for "+str(self.outstanding_parts)+" part"+ms+", requesting retry", RNS.LOG_DEBUG)
if self.window > self.window_min:
self.window -= 1
if self.window_max > self.window_min:
@ -446,7 +505,8 @@ class Resource:
self.cancel()
sleep_time = 0.001
else:
max_wait = self.rtt * self.timeout_factor * self.max_retries + self.sender_grace_time
max_extra_wait = sum([(r+1) * Resource.PER_RETRY_DELAY for r in range(self.MAX_RETRIES)])
max_wait = self.rtt * self.timeout_factor * self.max_retries + self.sender_grace_time + max_extra_wait
sleep_time = self.last_activity + max_wait - time.time()
if sleep_time < 0:
RNS.log("Resource timed out waiting for part requests", RNS.LOG_DEBUG)
@ -526,8 +586,11 @@ class Resource:
RNS.log("Error while executing resource assembled callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
try:
self.data.close()
if hasattr(self.data, "close") and callable(self.data.close):
self.data.close()
os.unlink(self.storagepath)
except Exception as e:
RNS.log("Error while cleaning up resource files, the contained exception was:", RNS.LOG_ERROR)
RNS.log(str(e))
@ -561,10 +624,26 @@ class Resource:
self.callback(self)
except Exception as e:
RNS.log("Error while executing resource concluded callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
finally:
try:
if hasattr(self, "input_file"):
if hasattr(self.input_file, "close") and callable(self.input_file.close):
self.input_file.close()
except Exception as e:
RNS.log("Error while closing resource input file: "+str(e), RNS.LOG_ERROR)
else:
# Otherwise we'll recursively create the
# next segment of the resource
Resource(self.input_file, self.link, callback = self.callback, segment_index = self.segment_index+1, original_hash=self.original_hash)
Resource(
self.input_file, self.link,
callback = self.callback,
segment_index = self.segment_index+1,
original_hash=self.original_hash,
progress_callback = self.__progress_callback,
request_id = self.request_id,
is_response = self.is_response,
)
else:
pass
else:
@ -572,73 +651,99 @@ class Resource:
def receive_part(self, packet):
while self.receiving_part:
sleep(0.001)
with self.receive_lock:
self.receiving_part = True
self.last_activity = time.time()
self.retries_left = self.max_retries
self.receiving_part = True
self.last_activity = time.time()
self.retries_left = self.max_retries
if self.req_resp == None:
self.req_resp = self.last_activity
rtt = self.req_resp-self.req_sent
if self.req_resp == None:
self.req_resp = self.last_activity
rtt = self.req_resp-self.req_sent
self.part_timeout_factor = Resource.PART_TIMEOUT_FACTOR_AFTER_RTT
if self.rtt == None:
self.rtt = self.link.rtt
self.watchdog_job()
elif rtt < self.rtt:
self.rtt = max(self.rtt - self.rtt*0.05, rtt)
elif rtt > self.rtt:
self.rtt = min(self.rtt + self.rtt*0.05, rtt)
self.part_timeout_factor = Resource.PART_TIMEOUT_FACTOR_AFTER_RTT
if self.rtt == None:
self.rtt = self.link.rtt
self.watchdog_job()
elif rtt < self.rtt:
self.rtt = max(self.rtt - self.rtt*0.05, rtt)
elif rtt > self.rtt:
self.rtt = min(self.rtt + self.rtt*0.05, rtt)
if rtt > 0:
req_resp_cost = len(packet.raw)+self.req_sent_bytes
self.req_resp_rtt_rate = req_resp_cost / rtt
if not self.status == Resource.FAILED:
self.status = Resource.TRANSFERRING
part_data = packet.data
part_hash = self.get_map_hash(part_data)
if self.req_resp_rtt_rate > Resource.RATE_FAST and self.fast_rate_rounds < Resource.FAST_RATE_THRESHOLD:
self.fast_rate_rounds += 1
i = self.consecutive_completed_height
for map_hash in self.hashmap[self.consecutive_completed_height:self.consecutive_completed_height+self.window]:
if map_hash == part_hash:
if self.parts[i] == None:
# Insert data into parts list
self.parts[i] = part_data
self.received_count += 1
self.outstanding_parts -= 1
if self.fast_rate_rounds == Resource.FAST_RATE_THRESHOLD:
self.window_max = Resource.WINDOW_MAX_FAST
# Update consecutive completed pointer
if i == self.consecutive_completed_height + 1:
self.consecutive_completed_height = i
cp = self.consecutive_completed_height + 1
while cp < len(self.parts) and self.parts[cp] != None:
self.consecutive_completed_height = cp
cp += 1
if not self.status == Resource.FAILED:
self.status = Resource.TRANSFERRING
part_data = packet.data
part_hash = self.get_map_hash(part_data)
if self.__progress_callback != None:
try:
self.__progress_callback(self)
except Exception as e:
RNS.log("Error while executing progress callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
consecutive_index = self.consecutive_completed_height if self.consecutive_completed_height >= 0 else 0
i = consecutive_index
for map_hash in self.hashmap[consecutive_index:consecutive_index+self.window]:
if map_hash == part_hash:
if self.parts[i] == None:
i += 1
# Insert data into parts list
self.parts[i] = part_data
self.rtt_rxd_bytes += len(part_data)
self.received_count += 1
self.outstanding_parts -= 1
self.receiving_part = False
# Update consecutive completed pointer
if i == self.consecutive_completed_height + 1:
self.consecutive_completed_height = i
cp = self.consecutive_completed_height + 1
while cp < len(self.parts) and self.parts[cp] != None:
self.consecutive_completed_height = cp
cp += 1
if self.received_count == self.total_parts and not self.assembly_lock:
self.assembly_lock = True
self.assemble()
elif self.outstanding_parts == 0:
# TODO: Figure out if there is a mathematically
# optimal way to adjust windows
if self.window < self.window_max:
self.window += 1
if (self.window - self.window_min) > (self.window_flexibility-1):
self.window_min += 1
if self.__progress_callback != None:
try:
self.__progress_callback(self)
except Exception as e:
RNS.log("Error while executing progress callback from "+str(self)+". The contained exception was: "+str(e), RNS.LOG_ERROR)
self.request_next()
else:
self.receiving_part = False
i += 1
self.receiving_part = False
if self.received_count == self.total_parts and not self.assembly_lock:
self.assembly_lock = True
self.assemble()
elif self.outstanding_parts == 0:
# TODO: Figure out if there is a mathematically
# optimal way to adjust windows
if self.window < self.window_max:
self.window += 1
if (self.window - self.window_min) > (self.window_flexibility-1):
self.window_min += 1
if self.req_sent != 0:
rtt = time.time()-self.req_sent
req_transferred = self.rtt_rxd_bytes - self.rtt_rxd_bytes_at_part_req
if rtt != 0:
self.req_data_rtt_rate = req_transferred/rtt
self.rtt_rxd_bytes_at_part_req = self.rtt_rxd_bytes
if self.req_data_rtt_rate > Resource.RATE_FAST and self.fast_rate_rounds < Resource.FAST_RATE_THRESHOLD:
self.fast_rate_rounds += 1
if self.fast_rate_rounds == Resource.FAST_RATE_THRESHOLD:
self.window_max = Resource.WINDOW_MAX_FAST
self.request_next()
else:
self.receiving_part = False
# Called on incoming resource to send a request for more data
def request_next(self):
@ -651,11 +756,11 @@ class Resource:
hashmap_exhausted = Resource.HASHMAP_IS_NOT_EXHAUSTED
requested_hashes = b""
offset = (1 if self.consecutive_completed_height > 0 else 0)
i = 0; pn = self.consecutive_completed_height+offset
i = 0; pn = self.consecutive_completed_height+1
search_start = pn
for part in self.parts[search_start:search_start+self.window]:
search_size = self.window
for part in self.parts[search_start:search_start+search_size]:
if part == None:
part_hash = self.hashmap[pn]
if part_hash != None:
@ -675,7 +780,6 @@ class Resource:
hmu_part += last_map_hash
self.waiting_for_hmu = True
requested_data = b""
request_data = hmu_part + self.hash + requested_hashes
request_packet = RNS.Packet(self.link, request_data, context = RNS.Packet.RESOURCE_REQ)
@ -683,7 +787,9 @@ class Resource:
request_packet.send()
self.last_activity = time.time()
self.req_sent = self.last_activity
self.req_sent_bytes = len(request_packet.raw)
self.req_resp = None
except Exception as e:
RNS.log("Could not send resource request packet, cancelling resource", RNS.LOG_DEBUG)
RNS.log("The contained exception was: "+str(e), RNS.LOG_DEBUG)
@ -707,27 +813,34 @@ class Resource:
requested_hashes = request_data[pad+RNS.Identity.HASHLENGTH//8:]
for i in range(0,len(requested_hashes)//Resource.MAPHASH_LEN):
requested_hash = requested_hashes[i*Resource.MAPHASH_LEN:(i+1)*Resource.MAPHASH_LEN]
search_start = self.receiver_min_consecutive_height
search_end = self.receiver_min_consecutive_height+ResourceAdvertisement.COLLISION_GUARD_SIZE
for part in self.parts[search_start:search_end]:
if part.map_hash == requested_hash:
try:
if not part.sent:
part.send()
self.sent_parts += 1
else:
part.resend()
self.last_activity = time.time()
self.last_part_sent = self.last_activity
break
except Exception as e:
RNS.log("Resource could not send parts, cancelling transfer!", RNS.LOG_DEBUG)
RNS.log("The contained exception was: "+str(e), RNS.LOG_DEBUG)
self.cancel()
# Define the search scope
search_start = self.receiver_min_consecutive_height
search_end = self.receiver_min_consecutive_height+ResourceAdvertisement.COLLISION_GUARD_SIZE
map_hashes = []
for i in range(0,len(requested_hashes)//Resource.MAPHASH_LEN):
map_hash = requested_hashes[i*Resource.MAPHASH_LEN:(i+1)*Resource.MAPHASH_LEN]
map_hashes.append(map_hash)
search_scope = self.parts[search_start:search_end]
requested_parts = list(filter(lambda part: part.map_hash in map_hashes, search_scope))
for part in requested_parts:
try:
if not part.sent:
part.send()
self.sent_parts += 1
else:
part.resend()
self.last_activity = time.time()
self.last_part_sent = self.last_activity
except Exception as e:
RNS.log("Resource could not send parts, cancelling transfer!", RNS.LOG_DEBUG)
RNS.log("The contained exception was: "+str(e), RNS.LOG_DEBUG)
self.cancel()
if wants_more_hashmap:
last_map_hash = request_data[1:Resource.MAPHASH_LEN+1]
@ -822,16 +935,51 @@ class Resource:
else:
self.progress_total_parts = float(self.total_parts)
progress = self.processed_parts / self.progress_total_parts
progress = min(1.0, self.processed_parts / self.progress_total_parts)
return progress
def get_transfer_size(self):
"""
:returns: The number of bytes needed to transfer the resource.
"""
return self.size
def get_data_size(self):
"""
:returns: The total data size of the resource.
"""
return self.total_size
def get_parts(self):
"""
:returns: The number of parts the resource will be transferred in.
"""
return self.total_parts
def get_segments(self):
"""
:returns: The number of segments the resource is divided into.
"""
return self.total_segments
def get_hash(self):
"""
:returns: The hash of the resource.
"""
return self.hash
def is_compressed(self):
"""
:returns: Whether the resource is compressed.
"""
return self.compressed
def __str__(self):
return "<"+RNS.hexrep(self.hash)+"/"+RNS.hexrep(self.link.link_id)+">"
return "<"+RNS.hexrep(self.hash,delimit=False)+"/"+RNS.hexrep(self.link.link_id,delimit=False)+">"
class ResourceAdvertisement:
OVERHEAD = 128
OVERHEAD = 134
HASHMAP_MAX_LEN = math.floor((RNS.Link.MDU-OVERHEAD)/Resource.MAPHASH_LEN)
COLLISION_GUARD_SIZE = 2*Resource.WINDOW_MAX+HASHMAP_MAX_LEN
@ -857,24 +1005,25 @@ class ResourceAdvertisement:
@staticmethod
def get_request_id(advertisement_packet):
def read_request_id(advertisement_packet):
adv = ResourceAdvertisement.unpack(advertisement_packet.plaintext)
return adv.q
@staticmethod
def get_transfer_size(advertisement_packet):
def read_transfer_size(advertisement_packet):
adv = ResourceAdvertisement.unpack(advertisement_packet.plaintext)
return adv.t
@staticmethod
def get_size(advertisement_packet):
def read_size(advertisement_packet):
adv = ResourceAdvertisement.unpack(advertisement_packet.plaintext)
return adv.d
def __init__(self, resource=None, request_id=None, is_response=False):
self.link = None
if resource != None:
self.t = resource.size # Transfer size
self.d = resource.total_size # Total uncompressed data size
@ -903,6 +1052,26 @@ class ResourceAdvertisement:
# Flags
self.f = 0x00 | self.p << 4 | self.u << 3 | self.s << 2 | self.c << 1 | self.e
def get_transfer_size(self):
return self.t
def get_data_size(self):
return self.d
def get_parts(self):
return self.n
def get_segments(self):
return self.l
def get_hash(self):
return self.h
def is_compressed(self):
return self.c
def get_link(self):
return self.link
def pack(self, segment=0):
hashmap_start = segment*ResourceAdvertisement.HASHMAP_MAX_LEN

File diff suppressed because it is too large

File diff suppressed because it is too large

RNS/Utilities/rncp.py (new file, +744 lines)

@ -0,0 +1,744 @@
#!/usr/bin/env python3
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import RNS
import argparse
import threading
import time
import sys
import os
from RNS._version import __version__
APP_NAME = "rncp"
allow_all = False
allowed_identity_hashes = []
def listen(configdir, verbosity = 0, quietness = 0, allowed = [], display_identity = False, limit = None, disable_auth = None, announce = False):
global allow_all, allowed_identity_hashes
from tempfile import TemporaryFile
identity = None
if announce < 0:
announce = False
targetloglevel = 3+verbosity-quietness
reticulum = RNS.Reticulum(configdir=configdir, loglevel=targetloglevel)
identity_path = RNS.Reticulum.identitypath+"/"+APP_NAME
if os.path.isfile(identity_path):
identity = RNS.Identity.from_file(identity_path)
if identity == None:
RNS.log("No valid saved identity found, creating new...", RNS.LOG_INFO)
identity = RNS.Identity()
identity.to_file(identity_path)
destination = RNS.Destination(identity, RNS.Destination.IN, RNS.Destination.SINGLE, APP_NAME, "receive")
if display_identity:
print("Identity : "+str(identity))
print("Listening on : "+RNS.prettyhexrep(destination.hash))
exit(0)
if disable_auth:
allow_all = True
else:
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
try:
allowed_file_name = "allowed_identities"
allowed_file = None
if os.path.isfile(os.path.expanduser("/etc/rncp/"+allowed_file_name)):
allowed_file = os.path.expanduser("/etc/rncp/"+allowed_file_name)
elif os.path.isfile(os.path.expanduser("~/.config/rncp/"+allowed_file_name)):
allowed_file = os.path.expanduser("~/.config/rncp/"+allowed_file_name)
elif os.path.isfile(os.path.expanduser("~/.rncp/"+allowed_file_name)):
allowed_file = os.path.expanduser("~/.rncp/"+allowed_file_name)
if allowed_file != None:
af = open(allowed_file, "r")
al = af.read().replace("\r", "").split("\n")
ali = []
for a in al:
if len(a) == dest_len:
ali.append(a)
if len(ali) > 0:
if not allowed:
allowed = ali
else:
allowed.extend(ali)
if len(ali) == 1:
ms = "y"
else:
ms = "ies"
RNS.log("Loaded "+str(len(ali))+" allowed identit"+ms+" from "+str(allowed_file), RNS.LOG_VERBOSE)
except Exception as e:
RNS.log("Error while parsing allowed_identities file. The contained exception was: "+str(e), RNS.LOG_ERROR)
if allowed != None:
for a in allowed:
try:
if len(a) != dest_len:
raise ValueError("Allowed destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2))
try:
destination_hash = bytes.fromhex(a)
allowed_identity_hashes.append(destination_hash)
except Exception as e:
raise ValueError("Invalid destination entered. Check your input.")
except Exception as e:
print(str(e))
exit(1)
if len(allowed_identity_hashes) < 1 and not disable_auth:
print("Warning: No allowed identities configured, rncp will not accept any files!")
def fetch_request(path, data, request_id, link_id, remote_identity, requested_at):
target_link = None
for link in RNS.Transport.active_links:
if link.link_id == link_id:
target_link = link
file_path = os.path.expanduser(data)
if not os.path.isfile(file_path):
RNS.log("Client-requested file not found: "+str(file_path), RNS.LOG_VERBOSE)
return False
else:
if target_link != None:
RNS.log("Sending file "+str(file_path)+" to client", RNS.LOG_VERBOSE)
temp_file = TemporaryFile()
real_file = open(file_path, "rb")
filename_bytes = os.path.basename(file_path).encode("utf-8")
filename_len = len(filename_bytes)
if filename_len > 0xFFFF:
print("Filename exceeds max size, cannot send")
exit(1)
else:
print("Preparing file...", end=" ")
temp_file.write(filename_len.to_bytes(2, "big"))
temp_file.write(filename_bytes)
temp_file.write(real_file.read())
temp_file.seek(0)
fetch_resource = RNS.Resource(temp_file, target_link)
return True
else:
return None
destination.set_link_established_callback(client_link_established)
destination.register_request_handler("fetch_file", response_generator=fetch_request, allow=RNS.Destination.ALLOW_LIST, allowed_list=allowed_identity_hashes)
print("rncp listening on "+RNS.prettyhexrep(destination.hash))
if announce >= 0:
def job():
destination.announce()
if announce > 0:
while True:
time.sleep(announce)
destination.announce()
threading.Thread(target=job, daemon=True).start()
while True:
time.sleep(1)
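A note on the handler contract wired up above: fetch_request returns True when the requested file exists and a resource transfer has been started, False when the file is missing, and None when it cannot service the request; the fetch() flow further down maps those responses to "found", "not_found" and "remote_error", while unauthorised requests surface through the failed callback. A condensed sketch of the requesting side, using only calls that appear in fetch() below (not part of the diff):

link.identify(identity)                             # the listener only serves identified, allowed peers
link.set_resource_strategy(RNS.Link.ACCEPT_ALL)
link.set_resource_started_callback(fetch_resource_started)
link.set_resource_concluded_callback(fetch_resource_concluded)
link.request("fetch_file", data="remote_file.txt",  # hypothetical remote path
             response_callback=request_response, failed_callback=request_failed)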
def client_link_established(link):
RNS.log("Incoming link established", RNS.LOG_VERBOSE)
link.set_remote_identified_callback(receive_sender_identified)
link.set_resource_strategy(RNS.Link.ACCEPT_APP)
link.set_resource_callback(receive_resource_callback)
link.set_resource_started_callback(receive_resource_started)
link.set_resource_concluded_callback(receive_resource_concluded)
def receive_sender_identified(link, identity):
global allow_all
if identity.hash in allowed_identity_hashes:
RNS.log("Authenticated sender", RNS.LOG_VERBOSE)
else:
if not allow_all:
RNS.log("Sender not allowed, tearing down link", RNS.LOG_VERBOSE)
link.teardown()
else:
pass
def receive_resource_callback(resource):
global allow_all
sender_identity = resource.link.get_remote_identity()
if sender_identity != None:
if sender_identity.hash in allowed_identity_hashes:
return True
if allow_all:
return True
return False
def receive_resource_started(resource):
if resource.link.get_remote_identity():
id_str = " from "+RNS.prettyhexrep(resource.link.get_remote_identity().hash)
else:
id_str = ""
print("Starting resource transfer "+RNS.prettyhexrep(resource.hash)+id_str)
def receive_resource_concluded(resource):
if resource.status == RNS.Resource.COMPLETE:
print(str(resource)+" completed")
if resource.total_size > 4:
filename_len = int.from_bytes(resource.data.read(2), "big")
filename = resource.data.read(filename_len).decode("utf-8")
counter = 0
saved_filename = filename
while os.path.isfile(saved_filename):
counter += 1
saved_filename = filename+"."+str(counter)
file = open(saved_filename, "wb")
file.write(resource.data.read())
file.close()
else:
print("Invalid data received, ignoring resource")
else:
print("Resource failed")
resource_done = False
current_resource = None
stats = []
speed = 0.0
def sender_progress(resource):
stats_max = 32
global current_resource, stats, speed, resource_done
current_resource = resource
now = time.time()
got = current_resource.get_progress()*current_resource.total_size
entry = [now, got]
stats.append(entry)
while len(stats) > stats_max:
stats.pop(0)
span = now - stats[0][0]
if span == 0:
speed = 0
else:
diff = got - stats[0][1]
speed = diff/span
if resource.status < RNS.Resource.COMPLETE:
resource_done = False
else:
resource_done = True
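sender_progress derives the displayed transfer rate from the oldest and newest of up to 32 retained samples. A worked example with invented numbers (not part of the utility):

# stats holds (timestamp, bytes_received) pairs, newest last
stats = [(10.0, 40000), (11.0, 46000), (12.0, 52500)]
now, got = stats[-1]
speed = (got - stats[0][1]) / (now - stats[0][0])   # (52500 - 40000) / 2.0 = 6250 bytes/s
# size_str(6250, "b") + "ps" further down renders this as "50.00 Kbps"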
link = None
def fetch(configdir, verbosity = 0, quietness = 0, destination = None, file = None, timeout = RNS.Transport.PATH_REQUEST_TIMEOUT, silent=False):
global current_resource, resource_done, link, speed
targetloglevel = 3+verbosity-quietness
try:
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
if len(destination) != dest_len:
raise ValueError("Allowed destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2))
try:
destination_hash = bytes.fromhex(destination)
except Exception as e:
raise ValueError("Invalid destination entered. Check your input.")
except Exception as e:
print(str(e))
exit(1)
reticulum = RNS.Reticulum(configdir=configdir, loglevel=targetloglevel)
identity_path = RNS.Reticulum.identitypath+"/"+APP_NAME
if os.path.isfile(identity_path):
identity = RNS.Identity.from_file(identity_path)
if identity == None:
RNS.log("Could not load identity for rncp. The identity file at \""+str(identity_path)+"\" may be corrupt or unreadable.", RNS.LOG_ERROR)
exit(2)
else:
identity = None
if identity == None:
RNS.log("No valid saved identity found, creating new...", RNS.LOG_INFO)
identity = RNS.Identity()
identity.to_file(identity_path)
if not RNS.Transport.has_path(destination_hash):
RNS.Transport.request_path(destination_hash)
if silent:
print("Path to "+RNS.prettyhexrep(destination_hash)+" requested")
else:
print("Path to "+RNS.prettyhexrep(destination_hash)+" requested ", end=" ")
sys.stdout.flush()
i = 0
syms = "⢄⢂⢁⡁⡈⡐⡠"
estab_timeout = time.time()+timeout
while not RNS.Transport.has_path(destination_hash) and time.time() < estab_timeout:
if not silent:
time.sleep(0.1)
print(("\b\b"+syms[i]+" "), end="")
sys.stdout.flush()
i = (i+1)%len(syms)
if not RNS.Transport.has_path(destination_hash):
if silent:
print("Path not found")
else:
print("\r \rPath not found")
exit(1)
else:
if silent:
print("Establishing link with "+RNS.prettyhexrep(destination_hash))
else:
print("\r \rEstablishing link with "+RNS.prettyhexrep(destination_hash)+" ", end=" ")
listener_identity = RNS.Identity.recall(destination_hash)
listener_destination = RNS.Destination(
listener_identity,
RNS.Destination.OUT,
RNS.Destination.SINGLE,
APP_NAME,
"receive"
)
link = RNS.Link(listener_destination)
while link.status != RNS.Link.ACTIVE and time.time() < estab_timeout:
if not silent:
time.sleep(0.1)
print(("\b\b"+syms[i]+" "), end="")
sys.stdout.flush()
i = (i+1)%len(syms)
if not RNS.Transport.has_path(destination_hash):
if silent:
print("Could not establish link with "+RNS.prettyhexrep(destination_hash))
else:
print("\r \rCould not establish link with "+RNS.prettyhexrep(destination_hash))
exit(1)
else:
if silent:
print("Requesting file from remote...")
else:
print("\r \rRequesting file from remote ", end=" ")
link.identify(identity)
request_resolved = False
request_status = "unknown"
resource_resolved = False
resource_status = "unrequested"
current_resource = None
def request_response(request_receipt):
nonlocal request_resolved, request_status
if request_receipt.response == False:
request_status = "not_found"
elif request_receipt.response == None:
request_status = "remote_error"
else:
request_status = "found"
request_resolved = True
def request_failed(request_receipt):
nonlocal request_resolved, request_status
request_status = "unknown"
request_resolved = True
def fetch_resource_started(resource):
nonlocal resource_status
current_resource = resource
current_resource.progress_callback(sender_progress)
resource_status = "started"
def fetch_resource_concluded(resource):
nonlocal resource_resolved, resource_status
if resource.status == RNS.Resource.COMPLETE:
if resource.total_size > 4:
filename_len = int.from_bytes(resource.data.read(2), "big")
filename = resource.data.read(filename_len).decode("utf-8")
counter = 0
saved_filename = filename
while os.path.isfile(saved_filename):
counter += 1
saved_filename = filename+"."+str(counter)
file = open(saved_filename, "wb")
file.write(resource.data.read())
file.close()
resource_status = "completed"
else:
print("Invalid data received, ignoring resource")
resource_status = "invalid_data"
else:
print("Resource failed")
resource_status = "failed"
resource_resolved = True
link.set_resource_strategy(RNS.Link.ACCEPT_ALL)
link.set_resource_started_callback(fetch_resource_started)
link.set_resource_concluded_callback(fetch_resource_concluded)
link.request("fetch_file", data=file, response_callback=request_response, failed_callback=request_failed)
syms = "⢄⢂⢁⡁⡈⡐⡠"
while not request_resolved:
if not silent:
time.sleep(0.1)
print(("\b\b"+syms[i]+" "), end="")
sys.stdout.flush()
i = (i+1)%len(syms)
if request_status == "not_found":
if not silent: print("\r \r", end="")
print("Fetch request failed, the file "+str(file)+" was not found on the remote")
link.teardown()
time.sleep(1)
exit(0)
elif request_status == "remote_error":
if not silent: print("\r \r", end="")
print("Fetch request failed due to an error on the remote system")
link.teardown()
time.sleep(1)
exit(0)
elif request_status == "unknown":
if not silent: print("\r \r", end="")
print("Fetch request failed due to an unknown error (probably not authorised)")
link.teardown()
time.sleep(1)
exit(0)
elif request_status == "found":
if not silent: print("\r \r", end="")
while not resource_resolved:
if not silent:
time.sleep(0.1)
if current_resource:
prg = current_resource.get_progress()
percent = round(prg * 100.0, 1)
stat_str = str(percent)+"% - " + size_str(int(prg*current_resource.total_size)) + " of " + size_str(current_resource.total_size) + " - " +size_str(speed, "b")+"ps"
print("\r \rTransferring file "+syms[i]+" "+stat_str, end=" ")
else:
print("\r \rWaiting for transfer to start "+syms[i]+" ", end=" ")
sys.stdout.flush()
i = (i+1)%len(syms)
if current_resource.status != RNS.Resource.COMPLETE:
if silent:
print("The transfer failed")
else:
print("\r \rThe transfer failed")
exit(1)
else:
if silent:
print(str(file_path)+" copied to "+RNS.prettyhexrep(destination_hash))
else:
print("\r \r"+str(file)+" fetched from "+RNS.prettyhexrep(destination_hash))
link.teardown()
time.sleep(0.25)
exit(0)
link.teardown()
exit(0)
def send(configdir, verbosity = 0, quietness = 0, destination = None, file = None, timeout = RNS.Transport.PATH_REQUEST_TIMEOUT, silent=False):
global current_resource, resource_done, link, speed
from tempfile import TemporaryFile
targetloglevel = 3+verbosity-quietness
try:
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
if len(destination) != dest_len:
raise ValueError("Allowed destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2))
try:
destination_hash = bytes.fromhex(destination)
except Exception as e:
raise ValueError("Invalid destination entered. Check your input.")
except Exception as e:
print(str(e))
exit(1)
file_path = os.path.expanduser(file)
if not os.path.isfile(file_path):
print("File not found")
exit(1)
temp_file = TemporaryFile()
real_file = open(file_path, "rb")
filename_bytes = os.path.basename(file_path).encode("utf-8")
filename_len = len(filename_bytes)
if filename_len > 0xFFFF:
print("Filename exceeds max size, cannot send")
exit(1)
else:
print("Preparing file...", end=" ")
temp_file.write(filename_len.to_bytes(2, "big"))
temp_file.write(filename_bytes)
temp_file.write(real_file.read())
temp_file.seek(0)
print("\r \r", end="")
reticulum = RNS.Reticulum(configdir=configdir, loglevel=targetloglevel)
identity_path = RNS.Reticulum.identitypath+"/"+APP_NAME
if os.path.isfile(identity_path):
identity = RNS.Identity.from_file(identity_path)
if identity == None:
RNS.log("Could not load identity for rncp. The identity file at \""+str(identity_path)+"\" may be corrupt or unreadable.", RNS.LOG_ERROR)
exit(2)
else:
identity = None
if identity == None:
RNS.log("No valid saved identity found, creating new...", RNS.LOG_INFO)
identity = RNS.Identity()
identity.to_file(identity_path)
if not RNS.Transport.has_path(destination_hash):
RNS.Transport.request_path(destination_hash)
if silent:
print("Path to "+RNS.prettyhexrep(destination_hash)+" requested")
else:
print("Path to "+RNS.prettyhexrep(destination_hash)+" requested ", end=" ")
sys.stdout.flush()
i = 0
syms = "⢄⢂⢁⡁⡈⡐⡠"
estab_timeout = time.time()+timeout
while not RNS.Transport.has_path(destination_hash) and time.time() < estab_timeout:
if not silent:
time.sleep(0.1)
print(("\b\b"+syms[i]+" "), end="")
sys.stdout.flush()
i = (i+1)%len(syms)
if not RNS.Transport.has_path(destination_hash):
if silent:
print("Path not found")
else:
print("\r \rPath not found")
exit(1)
else:
if silent:
print("Establishing link with "+RNS.prettyhexrep(destination_hash))
else:
print("\r \rEstablishing link with "+RNS.prettyhexrep(destination_hash)+" ", end=" ")
receiver_identity = RNS.Identity.recall(destination_hash)
receiver_destination = RNS.Destination(
receiver_identity,
RNS.Destination.OUT,
RNS.Destination.SINGLE,
APP_NAME,
"receive"
)
link = RNS.Link(receiver_destination)
while link.status != RNS.Link.ACTIVE and time.time() < estab_timeout:
if not silent:
time.sleep(0.1)
print(("\b\b"+syms[i]+" "), end="")
sys.stdout.flush()
i = (i+1)%len(syms)
if time.time() > estab_timeout:
if silent:
print("Link establishment with "+RNS.prettyhexrep(destination_hash)+" timed out")
else:
print("\r \rLink establishment with "+RNS.prettyhexrep(destination_hash)+" timed out")
exit(1)
elif not RNS.Transport.has_path(destination_hash):
if silent:
print("No path found to "+RNS.prettyhexrep(destination_hash))
else:
print("\r \rNo path found to "+RNS.prettyhexrep(destination_hash))
exit(1)
else:
if silent:
print("Advertising file resource...")
else:
print("\r \rAdvertising file resource ", end=" ")
link.identify(identity)
resource = RNS.Resource(temp_file, link, callback = sender_progress, progress_callback = sender_progress)
current_resource = resource
while resource.status < RNS.Resource.TRANSFERRING:
if not silent:
time.sleep(0.1)
print(("\b\b"+syms[i]+" "), end="")
sys.stdout.flush()
i = (i+1)%len(syms)
if resource.status > RNS.Resource.COMPLETE:
if silent:
print("File was not accepted by "+RNS.prettyhexrep(destination_hash))
else:
print("\r \rFile was not accepted by "+RNS.prettyhexrep(destination_hash))
exit(1)
else:
if silent:
print("Transferring file...")
else:
print("\r \rTransferring file ", end=" ")
while not resource_done:
if not silent:
time.sleep(0.1)
prg = current_resource.get_progress()
percent = round(prg * 100.0, 1)
stat_str = str(percent)+"% - " + size_str(int(prg*current_resource.total_size)) + " of " + size_str(current_resource.total_size) + " - " +size_str(speed, "b")+"ps"
print("\r \rTransferring file "+syms[i]+" "+stat_str, end=" ")
sys.stdout.flush()
i = (i+1)%len(syms)
if current_resource.status != RNS.Resource.COMPLETE:
if silent:
print("The transfer failed")
else:
print("\r \rThe transfer failed")
exit(1)
else:
if silent:
print(str(file_path)+" copied to "+RNS.prettyhexrep(destination_hash))
else:
print("\r \r"+str(file_path)+" copied to "+RNS.prettyhexrep(destination_hash))
link.teardown()
time.sleep(0.25)
real_file.close()
temp_file.close()
exit(0)
def main():
try:
parser = argparse.ArgumentParser(description="Reticulum File Transfer Utility")
parser.add_argument("file", nargs="?", default=None, help="file to be transferred", type=str)
parser.add_argument("destination", nargs="?", default=None, help="hexadecimal hash of the receiver", type=str)
parser.add_argument("--config", metavar="path", action="store", default=None, help="path to alternative Reticulum config directory", type=str)
parser.add_argument('-v', '--verbose', action='count', default=0, help="increase verbosity")
parser.add_argument('-q', '--quiet', action='count', default=0, help="decrease verbosity")
parser.add_argument("-S", '--silent', action='store_true', default=False, help="disable transfer progress output")
parser.add_argument("-l", '--listen', action='store_true', default=False, help="listen for incoming transfer requests")
parser.add_argument("-f", '--fetch', action='store_true', default=False, help="fetch file from remote listener instead of sending")
parser.add_argument("-b", action='store', metavar="seconds", default=-1, help="announce interval, 0 to only announce at startup", type=int)
parser.add_argument('-a', metavar="allowed_hash", dest="allowed", action='append', help="accept from this identity", type=str)
parser.add_argument('-n', '--no-auth', action='store_true', default=False, help="accept files from anyone")
parser.add_argument('-p', '--print-identity', action='store_true', default=False, help="print identity and destination info and exit")
parser.add_argument("-w", action="store", metavar="seconds", type=float, help="sender timeout before giving up", default=RNS.Transport.PATH_REQUEST_TIMEOUT)
# parser.add_argument("--limit", action="store", metavar="files", type=float, help="maximum number of files to accept", default=None)
parser.add_argument("--version", action="version", version="rncp {version}".format(version=__version__))
args = parser.parse_args()
if args.listen or args.print_identity:
listen(
configdir = args.config,
verbosity=args.verbose,
quietness=args.quiet,
allowed = args.allowed,
display_identity=args.print_identity,
# limit=args.limit,
disable_auth=args.no_auth,
announce=args.b,
)
elif args.fetch:
if args.destination != None and args.file != None:
fetch(
configdir = args.config,
verbosity = args.verbose,
quietness = args.quiet,
destination = args.destination,
file = args.file,
timeout = args.w,
silent = args.silent,
)
else:
print("")
parser.print_help()
print("")
elif args.destination != None and args.file != None:
send(
configdir = args.config,
verbosity = args.verbose,
quietness = args.quiet,
destination = args.destination,
file = args.file,
timeout = args.w,
silent = args.silent,
)
else:
print("")
parser.print_help()
print("")
except KeyboardInterrupt:
print("")
if current_resource != None:
current_resource.cancel()
if link != None:
link.teardown()
exit()
def size_str(num, suffix='B'):
units = ['','K','M','G','T','P','E','Z']
last_unit = 'Y'
if suffix == 'b':
num *= 8
units = ['','K','M','G','T','P','E','Z']
last_unit = 'Y'
for unit in units:
if abs(num) < 1000.0:
if unit == "":
return "%.0f %s%s" % (num, unit, suffix)
else:
return "%.2f %s%s" % (num, unit, suffix)
num /= 1000.0
return "%.2f%s%s" % (num, last_unit, suffix)
if __name__ == "__main__":
main()

RNS/Utilities/rnid.py (new file, +600 lines)

@ -0,0 +1,600 @@
#!/usr/bin/env python3
# MIT License
#
# Copyright (c) 2023 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import RNS
import argparse
import time
import sys
import os
import base64
from RNS._version import __version__
APP_NAME = "rnid"
SIG_EXT = "rsg"
ENCRYPT_EXT = "rfe"
CHUNK_SIZE = 16*1024*1024
def spin(until=None, msg=None, timeout=None):
i = 0
syms = "⢄⢂⢁⡁⡈⡐⡠"
if timeout != None:
timeout = time.time()+timeout
print(msg+" ", end=" ")
while (timeout == None or time.time()<timeout) and not until():
time.sleep(0.1)
print(("\b\b"+syms[i]+" "), end="")
sys.stdout.flush()
i = (i+1)%len(syms)
print("\r"+" "*len(msg)+" \r", end="")
if timeout != None and time.time() > timeout:
return False
else:
return True
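spin() animates a small spinner until the supplied predicate returns True, and returns False if the optional timeout expires first. A minimal sketch with an invented predicate (not part of the utility):

ready_at = time.time() + 3
finished = spin(until=lambda: time.time() > ready_at,
                msg="Waiting for an example condition", timeout=10)
# finished is True here, since the predicate became true before the 10 second timeout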
def main():
try:
parser = argparse.ArgumentParser(description="Reticulum Identity & Encryption Utility")
# parser.add_argument("file", nargs="?", default=None, help="input file path", type=str)
parser.add_argument("--config", metavar="path", action="store", default=None, help="path to alternative Reticulum config directory", type=str)
parser.add_argument("-i", "--identity", metavar="identity", action="store", default=None, help="hexadecimal Reticulum Destination hash or path to Identity file", type=str)
parser.add_argument("-g", "--generate", metavar="file", action="store", default=None, help="generate a new Identity")
parser.add_argument("-m", "--import", dest="import_str", metavar="identity_data", action="store", default=None, help="import Reticulum identity in hex, base32 or base64 format", type=str)
parser.add_argument("-x", "--export", action="store_true", default=None, help="export identity to hex, base32 or base64 format")
parser.add_argument("-v", "--verbose", action="count", default=0, help="increase verbosity")
parser.add_argument("-q", "--quiet", action="count", default=0, help="decrease verbosity")
parser.add_argument("-a", "--announce", metavar="aspects", action="store", default=None, help="announce a destination based on this Identity")
parser.add_argument("-H", "--hash", metavar="aspects", action="store", default=None, help="show destination hashes for other aspects for this Identity")
parser.add_argument("-e", "--encrypt", metavar="file", action="store", default=None, help="encrypt file")
parser.add_argument("-d", "--decrypt", metavar="file", action="store", default=None, help="decrypt file")
parser.add_argument("-s", "--sign", metavar="path", action="store", default=None, help="sign file")
parser.add_argument("-V", "--validate", metavar="path", action="store", default=None, help="validate signature")
parser.add_argument("-r", "--read", metavar="file", action="store", default=None, help="input file path", type=str)
parser.add_argument("-w", "--write", metavar="file", action="store", default=None, help="output file path", type=str)
parser.add_argument("-f", "--force", action="store_true", default=None, help="write output even if it overwrites existing files")
parser.add_argument("-I", "--stdin", action="store_true", default=False, help=argparse.SUPPRESS) # "read input from STDIN instead of file"
parser.add_argument("-O", "--stdout", action="store_true", default=False, help=argparse.SUPPRESS) # help="write output to STDOUT instead of file",
parser.add_argument("-R", "--request", action="store_true", default=False, help="request unknown Identities from the network")
parser.add_argument("-t", action="store", metavar="seconds", type=float, help="identity request timeout before giving up", default=RNS.Transport.PATH_REQUEST_TIMEOUT)
parser.add_argument("-p", "--print-identity", action="store_true", default=False, help="print identity info and exit")
parser.add_argument("-P", "--print-private", action="store_true", default=False, help="allow displaying private keys")
parser.add_argument("-b", "--base64", action="store_true", default=False, help="Use base64-encoded input and output")
parser.add_argument("-B", "--base32", action="store_true", default=False, help="Use base32-encoded input and output")
parser.add_argument("--version", action="version", version="rnid {version}".format(version=__version__))
args = parser.parse_args()
ops = 0;
for t in [args.encrypt, args.decrypt, args.validate, args.sign]:
if t:
ops += 1
if ops > 1:
RNS.log("This utility currently only supports one of the encrypt, decrypt, sign or verify operations per invocation", RNS.LOG_ERROR)
exit(1)
if not args.read:
if args.encrypt:
args.read = args.encrypt
if args.decrypt:
args.read = args.decrypt
if args.sign:
args.read = args.sign
identity_str = args.identity
if args.import_str:
identity_bytes = None
try:
if args.base64:
identity_bytes = base64.urlsafe_b64decode(args.import_str)
elif args.base32:
identity_bytes = base64.b32decode(args.import_str)
else:
identity_bytes = bytes.fromhex(args.import_str)
except Exception as e:
print("Invalid identity data specified for import: "+str(e))
exit(41)
try:
identity = RNS.Identity.from_bytes(identity_bytes)
except Exception as e:
print("Could not create Reticulum identity from specified data: "+str(e))
exit(42)
RNS.log("Identity imported")
if args.base64:
RNS.log("Public Key : "+base64.urlsafe_b64encode(identity.get_public_key()).decode("utf-8"))
elif args.base32:
RNS.log("Public Key : "+base64.b32encode(identity.get_public_key()).decode("utf-8"))
else:
RNS.log("Public Key : "+RNS.hexrep(identity.get_public_key(), delimit=False))
if identity.prv:
if args.print_private:
if args.base64:
RNS.log("Private Key : "+base64.urlsafe_b64encode(identity.get_private_key()).decode("utf-8"))
elif args.base32:
RNS.log("Private Key : "+base64.b32encode(identity.get_private_key()).decode("utf-8"))
else:
RNS.log("Private Key : "+RNS.hexrep(identity.get_private_key(), delimit=False))
else:
RNS.log("Private Key : Hidden")
if args.write:
try:
wp = os.path.expanduser(args.write)
if not os.path.isfile(wp) or args.force:
identity.to_file(wp)
RNS.log("Wrote imported identity to "+str(args.write))
else:
print("File "+str(wp)+" already exists, not overwriting")
exit(43)
except Exception as e:
print("Error while writing imported identity to file: "+str(e))
exit(44)
exit(0)
if not args.generate and not identity_str:
print("\nNo identity provided, cannot continue\n")
parser.print_help()
print("")
exit(2)
else:
targetloglevel = 4
verbosity = args.verbose
quietness = args.quiet
if verbosity != 0 or quietness != 0:
targetloglevel = targetloglevel+verbosity-quietness
# Start Reticulum
reticulum = RNS.Reticulum(configdir=args.config, loglevel=targetloglevel)
RNS.compact_log_fmt = True
if args.stdout:
RNS.loglevel = -1
if args.generate:
identity = RNS.Identity()
if not args.force and os.path.isfile(args.generate):
RNS.log("Identity file "+str(args.generate)+" already exists. Not overwriting.", RNS.LOG_ERROR)
exit(3)
else:
try:
identity.to_file(args.generate)
RNS.log("New identity written to "+str(args.generate))
exit(0)
except Exception as e:
RNS.log("An error ocurred while saving the generated Identity.", RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
exit(4)
identity = None
if len(identity_str) == RNS.Reticulum.TRUNCATED_HASHLENGTH//8*2 and not os.path.isfile(identity_str):
# Try recalling Identity from hex-encoded hash
try:
destination_hash = bytes.fromhex(identity_str)
identity = RNS.Identity.recall(destination_hash)
if identity == None:
if not args.request:
RNS.log("Could not recall Identity for "+RNS.prettyhexrep(destination_hash)+".", RNS.LOG_ERROR)
RNS.log("You can query the network for unknown Identities with the -R option.", RNS.LOG_ERROR)
exit(5)
else:
RNS.Transport.request_path(destination_hash)
def spincheck():
return RNS.Identity.recall(destination_hash) != None
spin(spincheck, "Requesting unknown Identity for "+RNS.prettyhexrep(destination_hash), args.t)
if not spincheck():
RNS.log("Identity request timed out", RNS.LOG_ERROR)
exit(6)
else:
identity = RNS.Identity.recall(destination_hash)
RNS.log("Received Identity "+str(identity)+" for destination "+RNS.prettyhexrep(destination_hash)+" from the network")
else:
RNS.log("Recalled Identity "+str(identity)+" for destination "+RNS.prettyhexrep(destination_hash))
except Exception as e:
RNS.log("Invalid hexadecimal hash provided", RNS.LOG_ERROR)
exit(7)
else:
# Try loading Identity from file
if not os.path.isfile(identity_str):
RNS.log("Specified Identity file not found")
exit(8)
else:
try:
identity = RNS.Identity.from_file(identity_str)
RNS.log("Loaded Identity "+str(identity)+" from "+str(identity_str))
except Exception as e:
RNS.log("Could not decode Identity from specified file")
exit(9)
if identity != None:
if args.hash:
try:
aspects = args.hash.split(".")
if not len(aspects) > 0:
RNS.log("Invalid destination aspects specified", RNS.LOG_ERROR)
exit(32)
else:
app_name = aspects[0]
aspects = aspects[1:]
if identity.pub != None:
destination = RNS.Destination(identity, RNS.Destination.OUT, RNS.Destination.SINGLE, app_name, *aspects)
RNS.log("The "+str(args.hash)+" destination for this Identity is "+RNS.prettyhexrep(destination.hash))
RNS.log("The full destination specifier is "+str(destination))
time.sleep(0.25)
exit(0)
else:
raise KeyError("No public key known")
except Exception as e:
RNS.log("An error ocurred while attempting to send the announce.", RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
exit(0)
if args.announce:
try:
aspects = args.announce.split(".")
if not len(aspects) > 1:
RNS.log("Invalid destination aspects specified", RNS.LOG_ERROR)
exit(32)
else:
app_name = aspects[0]
aspects = aspects[1:]
if identity.prv != None:
destination = RNS.Destination(identity, RNS.Destination.IN, RNS.Destination.SINGLE, app_name, *aspects)
RNS.log("Created destination "+str(destination))
RNS.log("Announcing destination "+RNS.prettyhexrep(destination.hash))
destination.announce()
time.sleep(0.25)
exit(0)
else:
destination = RNS.Destination(identity, RNS.Destination.OUT, RNS.Destination.SINGLE, app_name, *aspects)
RNS.log("The "+str(args.announce)+" destination for this Identity is "+RNS.prettyhexrep(destination.hash))
RNS.log("The full destination specifier is "+str(destination))
RNS.log("Cannot announce this destination, since the private key is not held")
time.sleep(0.25)
exit(33)
except Exception as e:
RNS.log("An error ocurred while attempting to send the announce.", RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
exit(0)
if args.print_identity:
if args.base64:
RNS.log("Public Key : "+base64.urlsafe_b64encode(identity.get_public_key()).decode("utf-8"))
elif args.base32:
RNS.log("Public Key : "+base64.b32encode(identity.get_public_key()).decode("utf-8"))
else:
RNS.log("Public Key : "+RNS.hexrep(identity.get_public_key(), delimit=False))
if identity.prv:
if args.print_private:
if args.base64:
RNS.log("Private Key : "+base64.urlsafe_b64encode(identity.get_private_key()).decode("utf-8"))
elif args.base32:
RNS.log("Private Key : "+base64.b32encode(identity.get_private_key()).decode("utf-8"))
else:
RNS.log("Private Key : "+RNS.hexrep(identity.get_private_key(), delimit=False))
else:
RNS.log("Private Key : Hidden")
exit(0)
if args.export:
if identity.prv:
if args.base64:
RNS.log("Exported Identity : "+base64.urlsafe_b64encode(identity.get_private_key()).decode("utf-8"))
elif args.base32:
RNS.log("Exported Identity : "+base64.b32encode(identity.get_private_key()).decode("utf-8"))
else:
RNS.log("Exported Identity : "+RNS.hexrep(identity.get_private_key(), delimit=False))
else:
RNS.log("Identity doesn't hold a private key, cannot export")
exit(50)
exit(0)
if args.validate:
if not args.read and args.validate.lower().endswith("."+SIG_EXT):
args.read = str(args.validate).replace("."+SIG_EXT, "")
if not os.path.isfile(args.validate):
RNS.log("Signature file "+str(args.read)+" not found", RNS.LOG_ERROR)
exit(10)
if not os.path.isfile(args.read):
RNS.log("Input file "+str(args.read)+" not found", RNS.LOG_ERROR)
exit(11)
data_input = None
if args.read:
if not os.path.isfile(args.read):
RNS.log("Input file "+str(args.read)+" not found", RNS.LOG_ERROR)
exit(12)
else:
try:
data_input = open(args.read, "rb")
except Exception as e:
RNS.log("Could not open input file for reading", RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
exit(13)
# TODO: Actually expand this to a good solution
# probably need to create a wrapper that takes
# into account not closing stdin when done
# elif args.stdin:
# data_input = sys.stdin
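# A possible shape for that wrapper (a hedged sketch, not part of this
# utility): proxy reads to the underlying stdio stream and make close()
# a no-op, so the shared cleanup paths below can call close() safely.
#
#   class NonClosingStream:
#       def __init__(self, stream):
#           self.stream = stream
#       def read(self, *args, **kwargs):
#           return self.stream.read(*args, **kwargs)
#       def close(self):
#           pass  # deliberately leave the underlying stdio stream open
#
#   data_input = NonClosingStream(sys.stdin.buffer)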
data_output = None
if args.encrypt and not args.write and not args.stdout and args.read:
args.write = str(args.read)+"."+ENCRYPT_EXT
if args.decrypt and not args.write and not args.stdout and args.read and args.read.lower().endswith("."+ENCRYPT_EXT):
args.write = str(args.read).replace("."+ENCRYPT_EXT, "")
if args.sign and identity.prv == None:
RNS.log("Specified Identity does not hold a private key. Cannot sign.", RNS.LOG_ERROR)
exit(14)
if args.sign and not args.write and not args.stdout and args.read:
args.write = str(args.read)+"."+SIG_EXT
if args.write:
if not args.force and os.path.isfile(args.write):
RNS.log("Output file "+str(args.write)+" already exists. Not overwriting.", RNS.LOG_ERROR)
exit(15)
else:
try:
data_output = open(args.write, "wb")
except Exception as e:
RNS.log("Could not open output file for writing", RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
exit(15)
# TODO: Actually expand this to a good solution
# probably need to create a wrapper that takes
# into account not closing stdout when done
# elif args.stdout:
# data_output = sys.stdout
if args.sign:
if identity.prv == None:
RNS.log("Specified Identity does not hold a private key. Cannot sign.", RNS.LOG_ERROR)
exit(16)
if not data_input:
if not args.stdout:
RNS.log("Signing requested, but no input data specified", RNS.LOG_ERROR)
exit(17)
else:
if not data_output:
if not args.stdout:
RNS.log("Signing requested, but no output specified", RNS.LOG_ERROR)
exit(18)
if not args.stdout:
RNS.log("Signing "+str(args.read))
try:
data_output.write(identity.sign(data_input.read()))
data_output.close()
data_input.close()
if not args.stdout:
if args.read:
RNS.log("File "+str(args.read)+" signed with "+str(identity)+" to "+str(args.write))
exit(0)
except Exception as e:
if not args.stdout:
RNS.log("An error ocurred while encrypting data.", RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
try:
data_output.close()
except:
pass
try:
data_input.close()
except:
pass
exit(19)
if args.validate:
if not data_input:
if not args.stdout:
RNS.log("Signature verification requested, but no input data specified", RNS.LOG_ERROR)
exit(20)
else:
# if not args.stdout:
# RNS.log("Verifying "+str(args.validate)+" for "+str(args.read))
try:
try:
sig_input = open(args.validate, "rb")
except Exception as e:
RNS.log("An error ocurred while opening "+str(args.validate)+".", RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
exit(21)
validated = identity.validate(sig_input.read(), data_input.read())
sig_input.close()
data_input.close()
if not validated:
if not args.stdout:
RNS.log("Signature "+str(args.validate)+" for file "+str(args.read)+" is invalid", RNS.LOG_ERROR)
exit(22)
else:
if not args.stdout:
RNS.log("Signature "+str(args.validate)+" for file "+str(args.read)+" made by Identity "+str(identity)+" is valid")
exit(0)
except Exception as e:
if not args.stdout:
RNS.log("An error ocurred while validating signature.", RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
try:
data_output.close()
except:
pass
try:
data_input.close()
except:
pass
exit(23)
if args.encrypt:
if not data_input:
if not args.stdout:
RNS.log("Encryption requested, but no input data specified", RNS.LOG_ERROR)
exit(24)
else:
if not data_output:
if not args.stdout:
RNS.log("Encryption requested, but no output specified", RNS.LOG_ERROR)
exit(25)
if not args.stdout:
RNS.log("Encrypting "+str(args.read))
try:
more_data = True
while more_data:
chunk = data_input.read(CHUNK_SIZE)
if chunk:
data_output.write(identity.encrypt(chunk))
else:
more_data = False
data_output.close()
data_input.close()
if not args.stdout:
if args.read:
RNS.log("File "+str(args.read)+" encrypted for "+str(identity)+" to "+str(args.write))
exit(0)
except Exception as e:
if not args.stdout:
RNS.log("An error ocurred while encrypting data.", RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
try:
data_output.close()
except:
pass
try:
data_input.close()
except:
pass
exit(26)
if args.decrypt:
if identity.prv == None:
RNS.log("Specified Identity does not hold a private key. Cannot decrypt.", RNS.LOG_ERROR)
exit(27)
if not data_input:
if not args.stdout:
RNS.log("Decryption requested, but no input data specified", RNS.LOG_ERROR)
exit(28)
else:
if not data_output:
if not args.stdout:
RNS.log("Decryption requested, but no output specified", RNS.LOG_ERROR)
exit(29)
if not args.stdout:
RNS.log("Decrypting "+str(args.read)+"...")
try:
more_data = True
while more_data:
chunk = data_input.read(CHUNK_SIZE)
if chunk:
plaintext = identity.decrypt(chunk)
if plaintext == None:
if not args.stdout:
RNS.log("Data could not be decrypted with the specified Identity")
exit(30)
else:
data_output.write(plaintext)
else:
more_data = False
data_output.close()
data_input.close()
if not args.stdout:
if args.read:
RNS.log("File "+str(args.read)+" decrypted with "+str(identity)+" to "+str(args.write))
exit(0)
except Exception as e:
if not args.stdout:
RNS.log("An error ocurred while decrypting data.", RNS.LOG_ERROR)
RNS.log("The contained exception was: "+str(e), RNS.LOG_ERROR)
try:
data_output.close()
except:
pass
try:
data_input.close()
except:
pass
exit(31)
else:
print("")
parser.print_help()
print("")
except KeyboardInterrupt:
print("")
exit(255)
if __name__ == "__main__":
main()
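For reference, a minimal sketch of the Identity operations this utility wraps, using only calls that appear in the listing above (the file path is an illustrative placeholder):

import RNS

# Create and persist a new Identity (what the generate branch above does)
identity = RNS.Identity()
identity.to_file("./example_identity")

# Load it back, then sign and verify some data
identity = RNS.Identity.from_file("./example_identity")
signature = identity.sign(b"hello")
assert identity.validate(signature, b"hello")

# Encrypt to the Identity and decrypt with its private key
ciphertext = identity.encrypt(b"secret")
plaintext = identity.decrypt(ciphertext)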

74
RNS/Utilities/rnir.py Normal file
View File

@ -0,0 +1,74 @@
#!/usr/bin/env python3
# MIT License
#
# Copyright (c) 2023 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import RNS
import argparse
import time
from RNS._version import __version__
def program_setup(configdir, verbosity = 0, quietness = 0, service = False):
targetverbosity = verbosity-quietness
if service:
targetlogdest = RNS.LOG_FILE
targetverbosity = None
else:
targetlogdest = RNS.LOG_STDOUT
reticulum = RNS.Reticulum(configdir=configdir, verbosity=targetverbosity, logdest=targetlogdest)
exit(0)
def main():
try:
parser = argparse.ArgumentParser(description="Reticulum Distributed Identity Resolver")
parser.add_argument("--config", action="store", default=None, help="path to alternative Reticulum config directory", type=str)
parser.add_argument('-v', '--verbose', action='count', default=0)
parser.add_argument('-q', '--quiet', action='count', default=0)
parser.add_argument("--exampleconfig", action='store_true', default=False, help="print verbose configuration example to stdout and exit")
parser.add_argument("--version", action="version", version="ir {version}".format(version=__version__))
args = parser.parse_args()
if args.exampleconfig:
print(__example_rns_config__)
exit()
if args.config:
configarg = args.config
else:
configarg = None
program_setup(configdir = configarg, verbosity=args.verbose, quietness=args.quiet)
except KeyboardInterrupt:
print("")
exit()
__example_rns_config__ = '''# This is an example Identity Resolver file.
'''
if __name__ == "__main__":
main()

3690
RNS/Utilities/rnodeconf.py Executable file

File diff suppressed because one or more lines are too long

View File

RNS/Utilities/rnpath.py

@ -30,7 +30,7 @@ import argparse
from RNS._version import __version__
def program_setup(configdir, table, rates, drop, destination_hexhash, verbosity, timeout, drop_queues):
def program_setup(configdir, table, rates, drop, destination_hexhash, verbosity, timeout, drop_queues, drop_via):
if table:
destination_hash = None
if destination_hexhash != None:
@ -155,6 +155,29 @@ def program_setup(configdir, table, rates, drop, destination_hexhash, verbosity,
sys.exit(1)
elif drop_via:
try:
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
if len(destination_hexhash) != dest_len:
raise ValueError("Destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2))
try:
destination_hash = bytes.fromhex(destination_hexhash)
except Exception as e:
raise ValueError("Invalid destination entered. Check your input.")
except Exception as e:
print(str(e))
sys.exit(1)
reticulum = RNS.Reticulum(configdir = configdir, loglevel = 3+verbosity)
if reticulum.drop_all_via(destination_hash):
print("Dropped all paths via "+RNS.prettyhexrep(destination_hash))
else:
print("Unable to drop paths via "+RNS.prettyhexrep(destination_hash)+". Does the transport instance exist?")
sys.exit(1)
else:
try:
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
@ -187,17 +210,22 @@ def program_setup(configdir, table, rates, drop, destination_hexhash, verbosity,
if RNS.Transport.has_path(destination_hash):
hops = RNS.Transport.hops_to(destination_hash)
next_hop = RNS.prettyhexrep(reticulum.get_next_hop(destination_hash))
next_hop_interface = reticulum.get_next_hop_if_name(destination_hash)
if hops != 1:
ms = "s"
next_hop_bytes = reticulum.get_next_hop(destination_hash)
if next_hop_bytes == None:
print("\r \rError: Invalid path data returned")
sys.exit(1)
else:
ms = ""
next_hop = RNS.prettyhexrep(next_hop_bytes)
next_hop_interface = reticulum.get_next_hop_if_name(destination_hash)
print("\rPath found, destination "+RNS.prettyhexrep(destination_hash)+" is "+str(hops)+" hop"+ms+" away via "+next_hop+" on "+next_hop_interface)
if hops != 1:
ms = "s"
else:
ms = ""
print("\rPath found, destination "+RNS.prettyhexrep(destination_hash)+" is "+str(hops)+" hop"+ms+" away via "+next_hop+" on "+next_hop_interface)
else:
print("\r \rPath not found")
print("\r \rPath not found")
sys.exit(1)
@ -251,13 +279,20 @@ def main():
default=False
)
parser.add_argument(
"-x", "--drop-via",
action="store_true",
help="drop all paths via specified transport instance",
default=False
)
parser.add_argument(
"-w",
action="store",
metavar="seconds",
type=float,
help="timeout before giving up",
default=15
default=RNS.Transport.PATH_REQUEST_TIMEOUT
)
parser.add_argument(
@ -277,7 +312,7 @@ def main():
else:
configarg = None
if not args.drop_announces and not args.table and not args.rates and not args.destination:
if not args.drop_announces and not args.table and not args.rates and not args.destination and not args.drop_via:
print("")
parser.print_help()
print("")
@ -291,6 +326,7 @@ def main():
verbosity = args.verbose,
timeout = args.w,
drop_queues = args.drop_announces,
drop_via = args.drop_via,
)
sys.exit(0)
@ -321,7 +357,7 @@ def pretty_date(time=False):
if second_diff < 3600:
return str(int(second_diff / 60)) + " minutes"
if second_diff < 7200:
return "an hour ago"
return "an hour"
if second_diff < 86400:
return str(int(second_diff / 3600)) + " hours"
if day_diff == 1:

RNS/Utilities/rnprobe.py
View File

@ -31,8 +31,10 @@ import argparse
from RNS._version import __version__
DEFAULT_PROBE_SIZE = 16
DEFAULT_TIMEOUT = 12
def program_setup(configdir, destination_hexhash, size=DEFAULT_PROBE_SIZE, full_name = None, verbosity = 0):
def program_setup(configdir, destination_hexhash, size=None, full_name = None, verbosity = 0, timeout=None, wait=0, probes=1):
if size == None: size = DEFAULT_PROBE_SIZE
if full_name == None:
print("The full destination name including application name aspects must be specified for the destination")
exit()
@ -71,14 +73,19 @@ def program_setup(configdir, destination_hexhash, size=DEFAULT_PROBE_SIZE, full_
print("Path to "+RNS.prettyhexrep(destination_hash)+" requested ", end=" ")
sys.stdout.flush()
_timeout = time.time() + (timeout or DEFAULT_TIMEOUT+reticulum.get_first_hop_timeout(destination_hash))
i = 0
syms = "⢄⢂⢁⡁⡈⡐⡠"
while not RNS.Transport.has_path(destination_hash):
while not RNS.Transport.has_path(destination_hash) and not time.time() > _timeout:
time.sleep(0.1)
print(("\b\b"+syms[i]+" "), end="")
sys.stdout.flush()
i = (i+1)%len(syms)
if time.time() > _timeout:
print("\r \rPath request timed out")
exit(1)
server_identity = RNS.Identity.recall(destination_hash)
request_destination = RNS.Destination(
@ -89,101 +96,120 @@ def program_setup(configdir, destination_hexhash, size=DEFAULT_PROBE_SIZE, full_
*aspects
)
probe = RNS.Packet(request_destination, os.urandom(size))
receipt = probe.send()
sent = 0
replies = 0
while probes:
if more_output:
more = " via "+RNS.prettyhexrep(reticulum.get_next_hop(destination_hash))+" on "+str(reticulum.get_next_hop_if_name(destination_hash))
else:
more = ""
if sent > 0:
time.sleep(wait)
print("\rSent "+str(size)+" byte probe to "+RNS.prettyhexrep(destination_hash)+more+" ", end=" ")
try:
probe = RNS.Packet(request_destination, os.urandom(size))
probe.pack()
except OSError:
print("Error: Probe packet size of "+str(len(probe.raw))+" bytes exceed MTU of "+str(RNS.Reticulum.MTU)+" bytes")
exit(3)
i = 0
while not receipt.status == RNS.PacketReceipt.DELIVERED:
time.sleep(0.1)
print(("\b\b"+syms[i]+" "), end="")
sys.stdout.flush()
i = (i+1)%len(syms)
receipt = probe.send()
sent += 1
print("\b\b ")
sys.stdout.flush()
if more_output:
nhd = reticulum.get_next_hop(destination_hash)
via_str = " via "+RNS.prettyhexrep(nhd) if nhd != None else ""
if_str = " on "+str(reticulum.get_next_hop_if_name(destination_hash)) if reticulum.get_next_hop_if_name(destination_hash) != "None" else ""
more = via_str+if_str
else:
more = ""
hops = RNS.Transport.hops_to(destination_hash)
if hops != 1:
ms = "s"
else:
ms = ""
print("\rSent probe "+str(sent)+" ("+str(size)+" bytes) to "+RNS.prettyhexrep(destination_hash)+more+" ", end=" ")
rtt = receipt.get_rtt()
if (rtt >= 1):
rtt = round(rtt, 3)
rttstring = str(rtt)+" seconds"
else:
rtt = round(rtt*1000, 3)
rttstring = str(rtt)+" milliseconds"
_timeout = time.time() + (timeout or DEFAULT_TIMEOUT+reticulum.get_first_hop_timeout(destination_hash))
i = 0
while receipt.status == RNS.PacketReceipt.SENT and not time.time() > _timeout:
time.sleep(0.1)
print(("\b\b"+syms[i]+" "), end="")
sys.stdout.flush()
i = (i+1)%len(syms)
reception_stats = ""
if reticulum.is_connected_to_shared_instance:
reception_rssi = reticulum.get_packet_rssi(receipt.proof_packet.packet_hash)
reception_snr = reticulum.get_packet_snr(receipt.proof_packet.packet_hash)
if reception_rssi != None:
reception_stats += " [RSSI "+str(reception_rssi)+" dBm]"
if time.time() > _timeout:
print("\r \rProbe timed out")
if reception_snr != None:
reception_stats += " [SNR "+str(reception_snr)+" dB]"
else:
print("\b\b ")
sys.stdout.flush()
if receipt.status == RNS.PacketReceipt.DELIVERED:
replies += 1
hops = RNS.Transport.hops_to(destination_hash)
if hops != 1:
ms = "s"
else:
ms = ""
rtt = receipt.get_rtt()
if (rtt >= 1):
rtt = round(rtt, 3)
rttstring = str(rtt)+" seconds"
else:
rtt = round(rtt*1000, 3)
rttstring = str(rtt)+" milliseconds"
reception_stats = ""
if reticulum.is_connected_to_shared_instance:
reception_rssi = reticulum.get_packet_rssi(receipt.proof_packet.packet_hash)
reception_snr = reticulum.get_packet_snr(receipt.proof_packet.packet_hash)
reception_q = reticulum.get_packet_q(receipt.proof_packet.packet_hash)
if reception_rssi != None:
reception_stats += " [RSSI "+str(reception_rssi)+" dBm]"
if reception_snr != None:
reception_stats += " [SNR "+str(reception_snr)+" dB]"
if reception_q != None:
reception_stats += " [Link Quality "+str(reception_q)+"%]"
else:
if receipt.proof_packet != None:
if receipt.proof_packet.rssi != None:
reception_stats += " [RSSI "+str(receipt.proof_packet.rssi)+" dBm]"
if receipt.proof_packet.snr != None:
reception_stats += " [SNR "+str(receipt.proof_packet.snr)+" dB]"
print(
"Valid reply from "+
RNS.prettyhexrep(receipt.destination.hash)+
"\nRound-trip time is "+rttstring+
" over "+str(hops)+" hop"+ms+
reception_stats+"\n"
)
else:
print("\r \rProbe timed out")
probes -= 1
loss = round((1-(replies/sent))*100, 2)
print(f"Sent {sent}, received {replies}, packet loss {loss}%")
if loss > 0:
exit(2)
else:
if receipt.proof_packet != None:
if receipt.proof_packet.rssi != None:
reception_stats += " [RSSI "+str(receipt.proof_packet.rssi)+" dBm]"
if receipt.proof_packet.snr != None:
reception_stats += " [SNR "+str(receipt.proof_packet.snr)+" dB]"
print(
"Valid reply received from "+
RNS.prettyhexrep(receipt.destination.hash)+
"\nRound-trip time is "+rttstring+
" over "+str(hops)+" hop"+ms+
reception_stats
)
exit(0)
def main():
try:
parser = argparse.ArgumentParser(description="Reticulum Probe Utility")
parser.add_argument("--config",
action="store",
default=None,
help="path to alternative Reticulum config directory",
type=str
)
parser.add_argument(
"--version",
action="version",
version="rnprobe {version}".format(version=__version__)
)
parser.add_argument(
"full_name",
nargs="?",
default=None,
help="full destination name in dotted notation",
type=str
)
parser.add_argument(
"destination_hash",
nargs="?",
default=None,
help="hexadecimal hash of the destination",
type=str
)
parser.add_argument("--config", action="store", default=None, help="path to alternative Reticulum config directory", type=str)
parser.add_argument("-s", "--size", action="store", default=None, help="size of probe packet payload in bytes", type=int)
parser.add_argument("-n", "--probes", action="store", default=1, help="number of probes to send", type=int)
parser.add_argument("-t", "--timeout", metavar="seconds", action="store", default=None, help="timeout before giving up", type=float)
parser.add_argument("-w", "--wait", metavar="seconds", action="store", default=0, help="time between each probe", type=float)
parser.add_argument("--version", action="version", version="rnprobe {version}".format(version=__version__))
parser.add_argument("full_name", nargs="?", default=None, help="full destination name in dotted notation", type=str)
parser.add_argument("destination_hash", nargs="?", default=None, help="hexadecimal hash of the destination", type=str)
parser.add_argument('-v', '--verbose', action='count', default=0)
@ -202,8 +228,12 @@ def main():
program_setup(
configdir = configarg,
destination_hexhash = args.destination_hash,
size = args.size,
full_name = args.full_name,
verbosity = args.verbose
verbosity = args.verbose,
probes = args.probes,
wait = args.wait,
timeout = args.timeout,
)
except KeyboardInterrupt:

RNS/Utilities/rnsd.py
View File

@ -30,15 +30,20 @@ from RNS._version import __version__
def program_setup(configdir, verbosity = 0, quietness = 0, service = False):
targetloglevel = 3+verbosity-quietness
targetverbosity = verbosity-quietness
if service:
RNS.logdest = RNS.LOG_FILE
RNS.logfile = RNS.Reticulum.configdir+"/logfile"
targetloglevel = None
targetlogdest = RNS.LOG_FILE
targetverbosity = None
else:
targetlogdest = RNS.LOG_STDOUT
reticulum = RNS.Reticulum(configdir=configdir, verbosity=targetverbosity, logdest=targetlogdest)
if reticulum.is_connected_to_shared_instance:
RNS.log("Started rnsd version {version} connected to another shared local instance, this is probably NOT what you want!".format(version=__version__), RNS.LOG_WARNING)
else:
RNS.log("Started rnsd version {version}".format(version=__version__), RNS.LOG_NOTICE)
reticulum = RNS.Reticulum(configdir=configdir, loglevel=targetloglevel)
RNS.log("Started rnsd version {version}".format(version=__version__), RNS.LOG_NOTICE)
while True:
time.sleep(1)
@ -82,7 +87,7 @@ __example_rns_config__ = '''# This is an example Reticulum config file.
# always-on. This directive is optional and can be removed
# for brevity.
enable_transport = False
enable_transport = No
# By default, the first program to launch the Reticulum
@ -106,6 +111,18 @@ share_instance = Yes
shared_instance_port = 37428
instance_control_port = 37429
# On systems where running instances may not have access
# to the same shared Reticulum configuration directory,
# it is still possible to allow full interactivity for
# running instances, by manually specifying a shared RPC
# key. In almost all cases, this option is not needed, but
# it can be useful on operating systems such as Android.
# The key must be specified as bytes in hexadecimal.
# rpc_key = e5c032d3ec4e64a6aca9927ba8ab73336780f6d71790
# You can configure Reticulum to panic and forcibly close
# if an unrecoverable interface error occurs, such as the
# hardware device for an interface disappearing. This is
@ -115,6 +132,17 @@ instance_control_port = 37429
panic_on_interface_error = No
# When Transport is enabled, it is possible to allow the
# Transport Instance to respond to probe requests from
# the rnprobe utility. This can be a useful tool to test
# connectivity. When this option is enabled, the probe
# destination will be generated from the Identity of the
# Transport Instance, and printed to the log at startup.
# Optional, and disabled by default.
respond_to_probes = No
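# As an illustrative combination (not part of the default example), a
# transport node that also answers rnprobe requests would set:
#
# enable_transport = Yes
# respond_to_probes = Yes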
[logging]
# Valid log levels are 0 through 7:
# 0: Log only critical information
@ -147,7 +175,7 @@ loglevel = 4
[[Default Interface]]
type = AutoInterface
interface_enabled = True
enabled = yes
# The following example enables communication with other
@ -155,7 +183,7 @@ loglevel = 4
[[UDP Interface]]
type = UDPInterface
interface_enabled = False
enabled = no
listen_ip = 0.0.0.0
listen_port = 4242
forward_ip = 255.255.255.255
@ -198,7 +226,7 @@ loglevel = 4
[[TCP Server Interface]]
type = TCPServerInterface
interface_enabled = False
enabled = no
# This configuration will listen on all IP
# interfaces on port 4242
@ -224,7 +252,7 @@ loglevel = 4
[[TCP Client Interface]]
type = TCPClientInterface
interface_enabled = False
enabled = no
target_host = 127.0.0.1
target_port = 4242
@ -237,9 +265,9 @@ loglevel = 4
[[I2P]]
type = I2PInterface
interface_enabled = yes
enabled = no
connectable = yes
peers = 5urvjicpzi7q3ybztsef4i5ow2aq4soktfj7zedz53s47r54jnqq.b32.i2p
peers = ykzlw5ujbaqc2xkec4cpvgyxj257wcrmmgkuxqmqcur7cq3w3lha.b32.i2p
# Here's an example of how to add a LoRa interface
@ -249,7 +277,7 @@ loglevel = 4
type = RNodeInterface
# Enable interface if you want to use it!
interface_enabled = False
enabled = no
# Serial port for the device
port = /dev/ttyUSB0
@ -301,7 +329,7 @@ loglevel = 4
type = KISSInterface
# Enable interface if you want to use it!
interface_enabled = False
enabled = no
# Serial port for the device
port = /dev/ttyUSB1
@ -372,7 +400,7 @@ loglevel = 4
ssid = 0
# Enable interface if you want to use it!
interface_enabled = False
enabled = no
# Serial port for the device
port = /dev/ttyUSB2

RNS/Utilities/rnstatus.py
View File

@ -46,7 +46,7 @@ def size_str(num, suffix='B'):
return "%.2f%s%s" % (num, last_unit, suffix)
def program_setup(configdir, dispall=False, verbosity = 0):
def program_setup(configdir, dispall=False, verbosity=0, name_filter=None, json=False, astats=False, sorting=None, sort_reverse=False):
reticulum = RNS.Reticulum(configdir = configdir, loglevel = 3+verbosity)
stats = None
@ -56,82 +56,163 @@ def program_setup(configdir, dispall=False, verbosity = 0):
pass
if stats != None:
for ifstat in stats["interfaces"]:
if json:
import json
for s in stats:
if isinstance(stats[s], bytes):
stats[s] = RNS.hexrep(stats[s], delimit=False)
if isinstance(stats[s], dict):
for i in stats[s]:
if isinstance(i, dict):
for k in i:
if isinstance(i[k], bytes):
i[k] = RNS.hexrep(i[k], delimit=False)
print(json.dumps(stats))
exit()
interfaces = stats["interfaces"]
if sorting != None and isinstance(sorting, str):
sorting = sorting.lower()
if sorting == "rate" or sorting == "bitrate":
interfaces.sort(key=lambda i: i["bitrate"], reverse=not sort_reverse)
if sorting == "rx":
interfaces.sort(key=lambda i: i["rxb"], reverse=not sort_reverse)
if sorting == "tx":
interfaces.sort(key=lambda i: i["txb"], reverse=not sort_reverse)
if sorting == "traffic":
interfaces.sort(key=lambda i: i["rxb"]+i["txb"], reverse=not sort_reverse)
if sorting == "announces" or sorting == "announce":
interfaces.sort(key=lambda i: i["incoming_announce_frequency"]+i["outgoing_announce_frequency"], reverse=not sort_reverse)
if sorting == "arx":
interfaces.sort(key=lambda i: i["incoming_announce_frequency"], reverse=not sort_reverse)
if sorting == "atx":
interfaces.sort(key=lambda i: i["outgoing_announce_frequency"], reverse=not sort_reverse)
if sorting == "held":
interfaces.sort(key=lambda i: i["held_announces"], reverse=not sort_reverse)
for ifstat in interfaces:
name = ifstat["name"]
if dispall or not (name.startswith("LocalInterface[") or name.startswith("TCPInterface[Client")):
print("")
if dispall or not (
name.startswith("LocalInterface[") or
name.startswith("TCPInterface[Client") or
name.startswith("I2PInterfacePeer[Connected peer") or
(name.startswith("I2PInterface[") and ("i2p_connectable" in ifstat and ifstat["i2p_connectable"] == False))
):
if ifstat["status"]:
ss = "Up"
else:
ss = "Down"
if not (name.startswith("I2PInterface[") and ("i2p_connectable" in ifstat and ifstat["i2p_connectable"] == False)):
if name_filter == None or name_filter.lower() in name.lower():
print("")
if ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_ACCESS_POINT:
modestr = "Access Point"
elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_POINT_TO_POINT:
modestr = "Point-to-Point"
elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_ROAMING:
modestr = "Roaming"
elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_BOUNDARY:
modestr = "Boundary"
else:
modestr = "Full"
if ifstat["clients"] != None:
clients = ifstat["clients"]
if name.startswith("Shared Instance["):
cnum = max(clients-1,0)
if cnum == 1:
spec_str = " program"
if ifstat["status"]:
ss = "Up"
else:
spec_str = " programs"
ss = "Down"
clients_string = "Serving : "+str(cnum)+spec_str
else:
clients_string = "Clients : "+str(clients)
if ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_ACCESS_POINT:
modestr = "Access Point"
elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_POINT_TO_POINT:
modestr = "Point-to-Point"
elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_ROAMING:
modestr = "Roaming"
elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_BOUNDARY:
modestr = "Boundary"
elif ifstat["mode"] == RNS.Interfaces.Interface.Interface.MODE_GATEWAY:
modestr = "Gateway"
else:
modestr = "Full"
else:
clients = None
print(" {n}".format(n=ifstat["name"]))
if ifstat["clients"] != None:
clients = ifstat["clients"]
if name.startswith("Shared Instance["):
cnum = max(clients-1,0)
if cnum == 1:
spec_str = " program"
else:
spec_str = " programs"
if "ifac_netname" in ifstat and ifstat["ifac_netname"] != None:
print(" Network : {nn}".format(nn=ifstat["ifac_netname"]))
clients_string = "Serving : "+str(cnum)+spec_str
elif name.startswith("I2PInterface["):
if "i2p_connectable" in ifstat and ifstat["i2p_connectable"] == True:
cnum = clients
if cnum == 1:
spec_str = " connected I2P endpoint"
else:
spec_str = " connected I2P endpoints"
print(" Status : {ss}".format(ss=ss))
clients_string = "Peers : "+str(cnum)+spec_str
else:
clients_string = ""
else:
clients_string = "Clients : "+str(clients)
if clients != None:
print(" "+clients_string)
else:
clients = None
if not (name.startswith("Shared Instance[") or name.startswith("TCPInterface[Client") or name.startswith("LocalInterface[")):
print(" Mode : {mode}".format(mode=modestr))
print(" {n}".format(n=ifstat["name"]))
if "bitrate" in ifstat and ifstat["bitrate"] != None:
print(" Rate : {ss}".format(ss=speed_str(ifstat["bitrate"])))
if "peers" in ifstat and ifstat["peers"] != None:
print(" Peers : {np} reachable".format(np=ifstat["peers"]))
if "ifac_netname" in ifstat and ifstat["ifac_netname"] != None:
print(" Network : {nn}".format(nn=ifstat["ifac_netname"]))
if "ifac_signature" in ifstat and ifstat["ifac_signature"] != None:
sigstr = "<…"+RNS.hexrep(ifstat["ifac_signature"][-5:], delimit=False)+">"
print(" Access : {nb}-bit IFAC by {sig}".format(nb=ifstat["ifac_size"]*8, sig=sigstr))
if "i2p_b32" in ifstat and ifstat["i2p_b32"] != None:
print(" I2P B32 : {ep}".format(ep=str(ifstat["i2p_b32"])))
print(" Status : {ss}".format(ss=ss))
if "announce_queue" in ifstat and ifstat["announce_queue"] != None and ifstat["announce_queue"] > 0:
aqn = ifstat["announce_queue"]
if aqn == 1:
print(" Queued : {np} announce".format(np=aqn))
else:
print(" Queued : {np} announces".format(np=aqn))
print(" Traffic : {txb}\n {rxb}".format(rxb=size_str(ifstat["rxb"]), txb=size_str(ifstat["txb"])))
if clients != None and clients_string != "":
print(" "+clients_string)
if not (name.startswith("Shared Instance[") or name.startswith("TCPInterface[Client") or name.startswith("LocalInterface[")):
print(" Mode : {mode}".format(mode=modestr))
if "bitrate" in ifstat and ifstat["bitrate"] != None:
print(" Rate : {ss}".format(ss=speed_str(ifstat["bitrate"])))
if "airtime_short" in ifstat and "airtime_long" in ifstat:
print(" Airtime : {ats}% (15s), {atl}% (1h)".format(ats=str(ifstat["airtime_short"]),atl=str(ifstat["airtime_long"])))
if "channel_load_short" in ifstat and "channel_load_long" in ifstat:
print(" Ch.Load : {ats}% (15s), {atl}% (1h)".format(ats=str(ifstat["channel_load_short"]),atl=str(ifstat["channel_load_long"])))
if "peers" in ifstat and ifstat["peers"] != None:
print(" Peers : {np} reachable".format(np=ifstat["peers"]))
if "tunnelstate" in ifstat and ifstat["tunnelstate"] != None:
print(" I2P : {ts}".format(ts=ifstat["tunnelstate"]))
if "ifac_signature" in ifstat and ifstat["ifac_signature"] != None:
sigstr = "<…"+RNS.hexrep(ifstat["ifac_signature"][-5:], delimit=False)+">"
print(" Access : {nb}-bit IFAC by {sig}".format(nb=ifstat["ifac_size"]*8, sig=sigstr))
if "i2p_b32" in ifstat and ifstat["i2p_b32"] != None:
print(" I2P B32 : {ep}".format(ep=str(ifstat["i2p_b32"])))
if astats and "announce_queue" in ifstat and ifstat["announce_queue"] != None and ifstat["announce_queue"] > 0:
aqn = ifstat["announce_queue"]
if aqn == 1:
print(" Queued : {np} announce".format(np=aqn))
else:
print(" Queued : {np} announces".format(np=aqn))
if astats and "held_announces" in ifstat and ifstat["held_announces"] != None and ifstat["held_announces"] > 0:
aqn = ifstat["held_announces"]
if aqn == 1:
print(" Held : {np} announce".format(np=aqn))
else:
print(" Held : {np} announces".format(np=aqn))
if astats and "incoming_announce_frequency" in ifstat and ifstat["incoming_announce_frequency"] != None:
print(" Announces : {iaf}".format(iaf=RNS.prettyfrequency(ifstat["outgoing_announce_frequency"])))
print(" {iaf}".format(iaf=RNS.prettyfrequency(ifstat["incoming_announce_frequency"])))
print(" Traffic : {txb}\n {rxb}".format(rxb=size_str(ifstat["rxb"]), txb=size_str(ifstat["txb"])))
if "transport_id" in stats and stats["transport_id"] != None:
print("\n Reticulum Transport Instance "+RNS.prettyhexrep(stats["transport_id"])+" running")
print("\n Transport Instance "+RNS.prettyhexrep(stats["transport_id"])+" running")
if "probe_responder" in stats and stats["probe_responder"] != None:
print(" Probe responder at "+RNS.prettyhexrep(stats["probe_responder"])+ " active")
print(" Uptime is "+RNS.prettytime(stats["transport_uptime"]))
print("")
@ -151,8 +232,43 @@ def main():
help="show all interfaces",
default=False
)
parser.add_argument(
"-A",
"--announce-stats",
action="store_true",
help="show announce stats",
default=False
)
parser.add_argument(
"-s",
"--sort",
action="store",
help="sort interfaces by [rate, traffic, rx, tx, announces, arx, atx, held]",
default=None,
type=str
)
parser.add_argument(
"-r",
"--reverse",
action="store_true",
help="reverse sorting",
default=False,
)
parser.add_argument(
"-j",
"--json",
action="store_true",
help="output in JSON format",
default=False
)
parser.add_argument('-v', '--verbose', action='count', default=0)
parser.add_argument("filter", nargs="?", default=None, help="only display interfaces with names including filter", type=str)
args = parser.parse_args()
@ -161,7 +277,16 @@ def main():
else:
configarg = None
program_setup(configdir = configarg, dispall = args.all, verbosity=args.verbose)
program_setup(
configdir = configarg,
dispall = args.all,
verbosity=args.verbose,
name_filter=args.filter,
json=args.json,
astats=args.announce_stats,
sorting=args.sort,
sort_reverse=args.reverse,
)
except KeyboardInterrupt:
print("")

714
RNS/Utilities/rnx.py Normal file
View File

@ -0,0 +1,714 @@
#!/usr/bin/env python3
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import RNS
import subprocess
import argparse
import shlex
import time
import sys
import tty
import os
from RNS._version import __version__
APP_NAME = "rnx"
identity = None
reticulum = None
allow_all = False
allowed_identity_hashes = []
def prepare_identity(identity_path):
global identity
if identity_path == None:
identity_path = RNS.Reticulum.identitypath+"/"+APP_NAME
if os.path.isfile(identity_path):
identity = RNS.Identity.from_file(identity_path)
if identity == None:
RNS.log("No valid saved identity found, creating new...", RNS.LOG_INFO)
identity = RNS.Identity()
identity.to_file(identity_path)
def listen(configdir, identitypath = None, verbosity = 0, quietness = 0, allowed = [], print_identity = False, disable_auth = None, disable_announce=False):
global identity, allow_all, allowed_identity_hashes, reticulum
targetloglevel = 3+verbosity-quietness
reticulum = RNS.Reticulum(configdir=configdir, loglevel=targetloglevel)
prepare_identity(identitypath)
destination = RNS.Destination(identity, RNS.Destination.IN, RNS.Destination.SINGLE, APP_NAME, "execute")
if print_identity:
print("Identity : "+str(identity))
print("Listening on : "+RNS.prettyhexrep(destination.hash))
exit(0)
if disable_auth:
allow_all = True
else:
if allowed != None:
for a in allowed:
try:
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
if len(a) != dest_len:
raise ValueError("Allowed destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2))
try:
destination_hash = bytes.fromhex(a)
allowed_identity_hashes.append(destination_hash)
except Exception as e:
raise ValueError("Invalid destination entered. Check your input.")
except Exception as e:
print(str(e))
exit(1)
if len(allowed_identity_hashes) < 1 and not disable_auth:
print("Warning: No allowed identities configured, rncx will not accept any commands!")
destination.set_link_established_callback(command_link_established)
if not allow_all:
destination.register_request_handler(
path = "command",
response_generator = execute_received_command,
allow = RNS.Destination.ALLOW_LIST,
allowed_list = allowed_identity_hashes
)
else:
destination.register_request_handler(
path = "command",
response_generator = execute_received_command,
allow = RNS.Destination.ALLOW_ALL,
)
RNS.log("rnx listening for commands on "+RNS.prettyhexrep(destination.hash))
if not disable_announce:
destination.announce()
while True:
time.sleep(1)
def command_link_established(link):
link.set_remote_identified_callback(initiator_identified)
link.set_link_closed_callback(command_link_closed)
RNS.log("Command link "+str(link)+" established")
def command_link_closed(link):
RNS.log("Command link "+str(link)+" closed")
def initiator_identified(link, identity):
global allow_all
RNS.log("Initiator of link "+str(link)+" identified as "+RNS.prettyhexrep(identity.hash))
if not allow_all and not identity.hash in allowed_identity_hashes:
RNS.log("Identity "+RNS.prettyhexrep(identity.hash)+" not allowed, tearing down link")
link.teardown()
def execute_received_command(path, data, request_id, remote_identity, requested_at):
command = data[0].decode("utf-8") # Command to execute
timeout = data[1] # Timeout in seconds
o_limit = data[2] # Size limit for stdout
e_limit = data[3] # Size limit for stderr
stdin = data[4] # Data passed to stdin
if remote_identity != None:
RNS.log("Executing command ["+command+"] for "+RNS.prettyhexrep(remote_identity.hash))
else:
RNS.log("Executing command ["+command+"] for unknown requestor")
result = [
False, # 0: Command was executed
None, # 1: Return value
None, # 2: Stdout
None, # 3: Stderr
None, # 4: Total stdout length
None, # 5: Total stderr length
time.time(), # 6: Started
None, # 7: Concluded
]
try:
process = subprocess.Popen(shlex.split(command), stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
result[0] = True
except Exception as e:
result[0] = False
return result
stdout = b""
stderr = b""
timed_out = False
if stdin != None:
process.stdin.write(stdin)
while True:
try:
stdout, stderr = process.communicate(timeout=1)
if process.poll() != None:
break
if len(stdout) > 0:
print(str(stdout))
sys.stdout.flush()
except subprocess.TimeoutExpired:
pass
if timeout != None and time.time() > result[6]+timeout:
RNS.log("Command ["+command+"] timed out and is being killed...")
process.terminate()
process.wait()
if process.poll() != None:
stdout, stderr = process.communicate()
else:
stdout = None
stderr = None
break
if timeout != None and time.time() < result[6]+timeout:
result[7] = time.time()
# Deliver result
result[1] = process.returncode
if o_limit != None and len(stdout) > o_limit:
if o_limit == 0:
result[2] = b""
else:
result[2] = stdout[0:o_limit]
else:
result[2] = stdout
if e_limit != None and len(stderr) > e_limit:
if e_limit == 0:
result[3] = b""
else:
result[3] = stderr[0:e_limit]
else:
result[3] = stderr
result[4] = len(stdout)
result[5] = len(stderr)
if timed_out:
RNS.log("Command timed out")
return result
if remote_identity != None:
RNS.log("Delivering result of command ["+str(command)+"] to "+RNS.prettyhexrep(remote_identity.hash))
else:
RNS.log("Delivering result of command ["+str(command)+"] to unknown requestor")
return result
def spin(until=None, msg=None, timeout=None):
i = 0
syms = "⢄⢂⢁⡁⡈⡐⡠"
if timeout != None:
timeout = time.time()+timeout
print(msg+" ", end=" ")
while (timeout == None or time.time()<timeout) and not until():
time.sleep(0.1)
print(("\b\b"+syms[i]+" "), end="")
sys.stdout.flush()
i = (i+1)%len(syms)
print("\r"+" "*len(msg)+" \r", end="")
if timeout != None and time.time() > timeout:
return False
else:
return True
current_progress = 0.0
stats = []
speed = 0.0
def spin_stat(until=None, timeout=None):
global current_progress, response_transfer_size, speed
i = 0
syms = "⢄⢂⢁⡁⡈⡐⡠"
if timeout != None:
timeout = time.time()+timeout
while (timeout == None or time.time()<timeout) and not until():
time.sleep(0.1)
prg = current_progress
percent = round(prg * 100.0, 1)
stat_str = str(percent)+"% - " + size_str(int(prg*response_transfer_size)) + " of " + size_str(response_transfer_size) + " - " +size_str(speed, "b")+"ps"
print("\r \rReceiving result "+syms[i]+" "+stat_str, end=" ")
sys.stdout.flush()
i = (i+1)%len(syms)
print("\r \r", end="")
if timeout != None and time.time() > timeout:
return False
else:
return True
def remote_execution_done(request_receipt):
pass
def remote_execution_progress(request_receipt):
stats_max = 32
global current_progress, response_transfer_size, speed
current_progress = request_receipt.progress
response_transfer_size = request_receipt.response_transfer_size
now = time.time()
got = current_progress*response_transfer_size
entry = [now, got]
stats.append(entry)
while len(stats) > stats_max:
stats.pop(0)
span = now - stats[0][0]
if span == 0:
speed = 0
else:
diff = got - stats[0][1]
speed = diff/span
link = None
listener_destination = None
remote_exec_grace = 2.0
def execute(configdir, identitypath = None, verbosity = 0, quietness = 0, detailed = False, mirror = False, noid = False, destination = None, command = None, stdin = None, stdoutl = None, stderrl = None, timeout = RNS.Transport.PATH_REQUEST_TIMEOUT, result_timeout = None, interactive = False):
global identity, reticulum, link, listener_destination, remote_exec_grace
try:
dest_len = (RNS.Reticulum.TRUNCATED_HASHLENGTH//8)*2
if len(destination) != dest_len:
raise ValueError("Allowed destination length is invalid, must be {hex} hexadecimal characters ({byte} bytes).".format(hex=dest_len, byte=dest_len//2))
try:
destination_hash = bytes.fromhex(destination)
except Exception as e:
raise ValueError("Invalid destination entered. Check your input.")
except Exception as e:
print(str(e))
exit(241)
if reticulum == None:
targetloglevel = 3+verbosity-quietness
reticulum = RNS.Reticulum(configdir=configdir, loglevel=targetloglevel)
if identity == None:
prepare_identity(identitypath)
if not RNS.Transport.has_path(destination_hash):
RNS.Transport.request_path(destination_hash)
if not spin(until=lambda: RNS.Transport.has_path(destination_hash), msg="Path to "+RNS.prettyhexrep(destination_hash)+" requested", timeout=timeout):
print("Path not found")
exit(242)
if listener_destination == None:
listener_identity = RNS.Identity.recall(destination_hash)
listener_destination = RNS.Destination(
listener_identity,
RNS.Destination.OUT,
RNS.Destination.SINGLE,
APP_NAME,
"execute"
)
if link == None or link.status == RNS.Link.CLOSED or link.status == RNS.Link.PENDING:
link = RNS.Link(listener_destination)
link.did_identify = False
if not spin(until=lambda: link.status == RNS.Link.ACTIVE, msg="Establishing link with "+RNS.prettyhexrep(destination_hash), timeout=timeout):
print("Could not establish link with "+RNS.prettyhexrep(destination_hash))
exit(243)
if not noid and not link.did_identify:
link.identify(identity)
link.did_identify = True
if stdin != None:
stdin = stdin.encode("utf-8")
request_data = [
command.encode("utf-8"), # Command to execute
timeout, # Timeout in seconds
stdoutl, # Size limit for stdout
stderrl, # Size limit for stderr
stdin, # Data passed to stdin
]
# TODO: Tune
rexec_timeout = timeout+link.rtt*4+remote_exec_grace
request_receipt = link.request(
path="command",
data=request_data,
response_callback=remote_execution_done,
failed_callback=remote_execution_done,
progress_callback=remote_execution_progress,
timeout=rexec_timeout
)
spin(
until=lambda:link.status == RNS.Link.CLOSED or (request_receipt.status != RNS.RequestReceipt.FAILED and request_receipt.status != RNS.RequestReceipt.SENT),
msg="Sending execution request",
timeout=rexec_timeout+0.5
)
if link.status == RNS.Link.CLOSED:
print("Could not request remote execution, link was closed")
exit(244)
if request_receipt.status == RNS.RequestReceipt.FAILED:
print("Could not request remote execution")
if interactive:
return
else:
exit(244)
spin(
until=lambda:request_receipt.status != RNS.RequestReceipt.DELIVERED,
msg="Command delivered, awaiting result",
timeout=timeout
)
if request_receipt.status == RNS.RequestReceipt.FAILED:
print("No result was received")
if interactive:
return
else:
exit(245)
spin_stat(
until=lambda:request_receipt.status != RNS.RequestReceipt.RECEIVING,
timeout=result_timeout
)
if request_receipt.status == RNS.RequestReceipt.FAILED:
print("Receiving result failed")
if interactive:
return
else:
exit(246)
if request_receipt.response != None:
try:
executed = request_receipt.response[0]
retval = request_receipt.response[1]
stdout = request_receipt.response[2]
stderr = request_receipt.response[3]
outlen = request_receipt.response[4]
errlen = request_receipt.response[5]
started = request_receipt.response[6]
concluded = request_receipt.response[7]
except Exception as e:
print("Received invalid result")
if interactive:
return
else:
exit(247)
if executed:
if detailed:
if stdout != None and len(stdout) > 0:
print(stdout.decode("utf-8"), end="")
if stderr != None and len(stderr) > 0:
print(stderr.decode("utf-8"), file=sys.stderr, end="")
sys.stdout.flush()
sys.stderr.flush()
print("\n--- End of remote output, rnx done ---")
if started != None and concluded != None:
cmd_duration = round(concluded - started, 3)
print("Remote command execution took "+str(cmd_duration)+" seconds")
total_size = request_receipt.response_size
if request_receipt.request_size != None:
total_size += request_receipt.request_size
transfer_duration = round(request_receipt.response_concluded_at - request_receipt.sent_at - cmd_duration, 3)
if transfer_duration == 1:
tdstr = " in 1 second"
elif transfer_duration < 10:
tdstr = " in "+str(transfer_duration)+" seconds"
else:
tdstr = " in "+pretty_time(transfer_duration)
spdstr = ", effective rate "+size_str(total_size/transfer_duration, "b")+"ps"
print("Transferred "+size_str(total_size)+tdstr+spdstr)
if outlen != None and stdout != None:
if len(stdout) < outlen:
tstr = ", "+str(len(stdout))+" bytes displayed"
else:
tstr = ""
print("Remote wrote "+str(outlen)+" bytes to stdout"+tstr)
if errlen != None and stderr != None:
if len(stderr) < errlen:
tstr = ", "+str(len(stderr))+" bytes displayed"
else:
tstr = ""
print("Remote wrote "+str(errlen)+" bytes to stderr"+tstr)
else:
if stdout != None and len(stdout) > 0:
print(stdout.decode("utf-8"), end="")
if stderr != None and len(stderr) > 0:
print(stderr.decode("utf-8"), file=sys.stderr, end="")
if (stdoutl != 0 and len(stdout) < outlen) or (stderrl != 0 and len(stderr) < errlen):
sys.stdout.flush()
sys.stderr.flush()
print("\nOutput truncated before being returned:")
if len(stdout) != 0 and len(stdout) < outlen:
print(" stdout truncated to "+str(len(stdout))+" bytes")
if len(stderr) != 0 and len(stderr) < errlen:
print(" stderr truncated to "+str(len(stderr))+" bytes")
else:
print("Remote could not execute command")
if interactive:
return
else:
exit(248)
else:
print("No response")
if interactive:
return
else:
exit(249)
try:
if not interactive:
link.teardown()
except Exception as e:
pass
if not interactive and mirror:
if request_receipt.response[1] != None:
exit(request_receipt.response[1])
else:
exit(240)
else:
if interactive:
if mirror:
return request_receipt.response[1]
else:
return None
else:
exit(0)
def main():
try:
parser = argparse.ArgumentParser(description="Reticulum Remote Execution Utility")
parser.add_argument("destination", nargs="?", default=None, help="hexadecimal hash of the listener", type=str)
parser.add_argument("command", nargs="?", default=None, help="command to be execute", type=str)
parser.add_argument("--config", metavar="path", action="store", default=None, help="path to alternative Reticulum config directory", type=str)
parser.add_argument('-v', '--verbose', action='count', default=0, help="increase verbosity")
parser.add_argument('-q', '--quiet', action='count', default=0, help="decrease verbosity")
parser.add_argument('-p', '--print-identity', action='store_true', default=False, help="print identity and destination info and exit")
parser.add_argument("-l", '--listen', action='store_true', default=False, help="listen for incoming commands")
parser.add_argument('-i', metavar="identity", action='store', dest="identity", default=None, help="path to identity to use", type=str)
parser.add_argument("-x", '--interactive', action='store_true', default=False, help="enter interactive mode")
parser.add_argument("-b", '--no-announce', action='store_true', default=False, help="don't announce at program start")
parser.add_argument('-a', metavar="allowed_hash", dest="allowed", action='append', help="accept from this identity", type=str)
parser.add_argument('-n', '--noauth', action='store_true', default=False, help="accept commands from anyone")
parser.add_argument('-N', '--noid', action='store_true', default=False, help="don't identify to listener")
parser.add_argument("-d", '--detailed', action='store_true', default=False, help="show detailed result output")
parser.add_argument("-m", action='store_true', dest="mirror", default=False, help="mirror exit code of remote command")
parser.add_argument("-w", action="store", metavar="seconds", type=float, help="connect and request timeout before giving up", default=RNS.Transport.PATH_REQUEST_TIMEOUT)
parser.add_argument("-W", action="store", metavar="seconds", type=float, help="max result download time", default=None)
parser.add_argument("--stdin", action='store', default=None, help="pass input to stdin", type=str)
parser.add_argument("--stdout", action='store', default=None, help="max size in bytes of returned stdout", type=int)
parser.add_argument("--stderr", action='store', default=None, help="max size in bytes of returned stderr", type=int)
parser.add_argument("--version", action="version", version="rnx {version}".format(version=__version__))
args = parser.parse_args()
if args.listen or args.print_identity:
listen(
configdir = args.config,
identitypath = args.identity,
verbosity=args.verbose,
quietness=args.quiet,
allowed = args.allowed,
print_identity=args.print_identity,
disable_auth=args.noauth,
disable_announce=args.no_announce,
)
elif args.destination != None and args.command != None:
execute(
configdir = args.config,
identitypath = args.identity,
verbosity = args.verbose,
quietness = args.quiet,
detailed = args.detailed,
mirror = args.mirror,
noid = args.noid,
destination = args.destination,
command = args.command,
stdin = args.stdin,
stdoutl = args.stdout,
stderrl = args.stderr,
timeout = args.w,
result_timeout = args.W,
interactive = args.interactive,
)
if args.destination != None and args.interactive:
# command_history_max = 5000
# command_history = []
# command_current = ""
# history_idx = 0
# tty.setcbreak(sys.stdin.fileno())
code = None
while True:
try:
cstr = str(code) if code and code != 0 else ""
prompt = cstr+"> "
print(prompt,end="")
# cmdbuf = b""
# while True:
# ch = sys.stdin.read(1)
# cmdbuf += ch.encode("utf-8")
# print("\r"+prompt+cmdbuf.decode("utf-8"), end="")
command = input()
if command.lower() == "exit" or command.lower() == "quit":
exit(0)
except KeyboardInterrupt:
exit(0)
except EOFError:
exit(0)
if command.lower() == "clear":
print('\033c', end='')
# command_history.append(command)
# while len(command_history) > command_history_max:
# command_history.pop(0)
else:
code = execute(
configdir = args.config,
identitypath = args.identity,
verbosity = args.verbose,
quietness = args.quiet,
detailed = args.detailed,
mirror = args.mirror,
noid = args.noid,
destination = args.destination,
command = command,
stdin = None,
stdoutl = args.stdout,
stderrl = args.stderr,
timeout = args.w,
result_timeout = args.W,
interactive = True,
)
else:
print("")
parser.print_help()
print("")
except KeyboardInterrupt:
# tty.setnocbreak(sys.stdin.fileno())
print("")
if link != None:
link.teardown()
exit()
def size_str(num, suffix='B'):
units = ['','K','M','G','T','P','E','Z']
last_unit = 'Y'
if suffix == 'b':
num *= 8
units = ['','K','M','G','T','P','E','Z']
last_unit = 'Y'
for unit in units:
if abs(num) < 1000.0:
if unit == "":
return "%.0f %s%s" % (num, unit, suffix)
else:
return "%.2f %s%s" % (num, unit, suffix)
num /= 1000.0
return "%.2f%s%s" % (num, last_unit, suffix)
def pretty_time(time, verbose=False):
days = int(time // (24 * 3600))
time = time % (24 * 3600)
hours = int(time // 3600)
time %= 3600
minutes = int(time // 60)
time %= 60
seconds = round(time, 2)
ss = "" if seconds == 1 else "s"
sm = "" if minutes == 1 else "s"
sh = "" if hours == 1 else "s"
sd = "" if days == 1 else "s"
components = []
if days > 0:
components.append(str(days)+" day"+sd if verbose else str(days)+"d")
if hours > 0:
components.append(str(hours)+" hour"+sh if verbose else str(hours)+"h")
if minutes > 0:
components.append(str(minutes)+" minute"+sm if verbose else str(minutes)+"m")
if seconds > 0:
components.append(str(seconds)+" second"+ss if verbose else str(seconds)+"s")
i = 0
tstr = ""
for c in components:
i += 1
if i == 1:
pass
elif i < len(components):
tstr += ", "
elif i == len(components):
tstr += " and "
tstr += c
return tstr
if __name__ == "__main__":
main()
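As a usage note, the execute() helper above can also be driven directly from Python. A minimal sketch, where the destination hash is a placeholder for a node running rnx with --listen:

from RNS.Utilities import rnx

# Placeholder 32-character hex destination hash of a listening rnx instance
listener_hash = "e5c032d3ec4e64a6aca9927ba8ab7333"

# Roughly equivalent to running: rnx <hash> "uname -a"
rnx.execute(configdir=None, destination=listener_hash, command="uname -a")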

RNS/__init__.py
View File

@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 Mark Qvist / unsigned.io
# Copyright (c) 2016-2023 Mark Qvist / unsigned.io and contributors
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@ -32,11 +32,16 @@ from ._version import __version__
from .Reticulum import Reticulum
from .Identity import Identity
from .Link import Link, RequestReceipt
from .Channel import MessageBase
from .Buffer import Buffer, RawChannelReader, RawChannelWriter
from .Transport import Transport
from .Destination import Destination
from .Packet import Packet
from .Packet import PacketReceipt
from .Resolver import Resolver
from .Resource import Resource, ResourceAdvertisement
from .Cryptography import HKDF
from .Cryptography import Hashes
modules = glob.glob(os.path.dirname(__file__)+"/*.py")
__all__ = [ os.path.basename(f)[:-3] for f in modules if not f.endswith('__init__.py')]
@ -55,10 +60,11 @@ LOG_FILE = 0x92
LOG_MAXSIZE = 5*1024*1024
loglevel = LOG_NOTICE
logfile = None
logdest = LOG_STDOUT
logtimefmt = "%Y-%m-%d %H:%M:%S"
loglevel = LOG_NOTICE
logfile = None
logdest = LOG_STDOUT
logtimefmt = "%Y-%m-%d %H:%M:%S"
compact_log_fmt = False
instance_random = random.Random()
instance_random.seed(os.urandom(10))
@ -99,10 +105,14 @@ def timestamp_str(time_s):
return time.strftime(logtimefmt, timestamp)
def log(msg, level=3, _override_destination = False):
global _always_override_destination
global _always_override_destination, compact_log_fmt
msg = str(msg)
if loglevel >= level:
logstring = "["+timestamp_str(time.time())+"] ["+loglevelname(level)+"] "+msg
if not compact_log_fmt:
logstring = "["+timestamp_str(time.time())+"] ["+loglevelname(level)+"] "+msg
else:
logstring = "["+timestamp_str(time.time())+"] "+msg
logging_lock.acquire()
if (logdest == LOG_STDOUT or _always_override_destination or _override_destination):
@@ -134,6 +144,12 @@ def rand():
result = instance_random.random()
return result
def trace_exception(e):
import traceback
exception_info = "".join(traceback.TracebackException.from_exception(e).format())
log(f"An unhandled {str(type(e))} exception occurred: {str(e)}", LOG_ERROR)
log(exception_info, LOG_ERROR)
def hexrep(data, delimit=True):
try:
iter(data)
@@ -151,9 +167,121 @@ def prettyhexrep(data):
hexrep = "<"+delimiter.join("{:02x}".format(c) for c in data)+">"
return hexrep
def prettyspeed(num, suffix="b"):
return prettysize(num/8, suffix=suffix)+"ps"
def prettysize(num, suffix='B'):
units = ['','K','M','G','T','P','E','Z']
last_unit = 'Y'
if suffix == 'b':
num *= 8
units = ['','K','M','G','T','P','E','Z']
last_unit = 'Y'
for unit in units:
if abs(num) < 1000.0:
if unit == "":
return "%.0f %s%s" % (num, unit, suffix)
else:
return "%.2f %s%s" % (num, unit, suffix)
num /= 1000.0
return "%.2f%s%s" % (num, last_unit, suffix)
def prettyfrequency(hz, suffix="Hz"):
num = hz*1e6
units = ["µ", "m", "", "K","M","G","T","P","E","Z"]
last_unit = "Y"
for unit in units:
if abs(num) < 1000.0:
return "%.2f %s%s" % (num, unit, suffix)
num /= 1000.0
return "%.2f%s%s" % (num, last_unit, suffix)
def prettydistance(m, suffix="m"):
num = m*1e6
units = ["µ", "m", "c", ""]
last_unit = "K"
for unit in units:
divisor = 1000.0
if unit == "m": divisor = 10
if unit == "c": divisor = 100
if abs(num) < divisor:
return "%.2f %s%s" % (num, unit, suffix)
num /= divisor
return "%.2f %s%s" % (num, last_unit, suffix)
def prettytime(time, verbose=False, compact=False):
days = int(time // (24 * 3600))
time = time % (24 * 3600)
hours = int(time // 3600)
time %= 3600
minutes = int(time // 60)
time %= 60
if compact:
seconds = int(time)
else:
seconds = round(time, 2)
ss = "" if seconds == 1 else "s"
sm = "" if minutes == 1 else "s"
sh = "" if hours == 1 else "s"
sd = "" if days == 1 else "s"
displayed = 0
components = []
if days > 0 and ((not compact) or displayed < 2):
components.append(str(days)+" day"+sd if verbose else str(days)+"d")
displayed += 1
if hours > 0 and ((not compact) or displayed < 2):
components.append(str(hours)+" hour"+sh if verbose else str(hours)+"h")
displayed += 1
if minutes > 0 and ((not compact) or displayed < 2):
components.append(str(minutes)+" minute"+sm if verbose else str(minutes)+"m")
displayed += 1
if seconds > 0 and ((not compact) or displayed < 2):
components.append(str(seconds)+" second"+ss if verbose else str(seconds)+"s")
displayed += 1
i = 0
tstr = ""
for c in components:
i += 1
if i == 1:
pass
elif i < len(components):
tstr += ", "
elif i == len(components):
tstr += " and "
tstr += c
if tstr == "":
return "0s"
else:
return tstr
def phyparams():
print("Required Physical Layer MTU : "+str(Reticulum.MTU)+" bytes")
print("Plaintext Packet MDU : "+str(Packet.PLAIN_MDU)+" bytes")
print("Encrypted Packet MDU : "+str(Packet.ENCRYPTED_MDU)+" bytes")
print("Link Curve : "+str(Link.CURVE))
print("Link Packet MDU : "+str(Link.MDU)+" bytes")
print("Link Public Key Size : "+str(Link.ECPUBSIZE*8)+" bits")
print("Link Private Key Size : "+str(Link.KEYSIZE*8)+" bits")
def panic():
os._exit(255)
def exit():
print("")
sys.exit(0)
sys.exit(0)
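To illustrate the utility functions added in this file, a minimal sketch (assuming only that the RNS package is importable; the expected results shown as comments are not part of the module):
import RNS

RNS.prettysize(1500)                 # "1.50 KB"
RNS.prettyspeed(1000000)             # "1.00 Mbps" (argument is in bits per second)
RNS.prettyfrequency(868000000)       # "868.00 MHz"
RNS.prettytime(90061)                # "1d, 1h, 1m and 1s"
RNS.prettytime(90061, compact=True)  # "1d and 1h" (compact output keeps at most two components)

try:
    1/0
except Exception as e:
    RNS.trace_exception(e)           # logs exception type, message and traceback at LOG_ERROR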


@@ -1 +1 @@
__version__ = "0.3.6"
__version__ = "0.7.5"


@@ -25,7 +25,7 @@ async def get_sam_socket(sam_address=sam.DEFAULT_ADDRESS, loop=None):
:param loop: (optional) event loop instance
:return: A (reader, writer) pair
"""
reader, writer = await asyncio.open_connection(*sam_address, loop=loop)
reader, writer = await asyncio.open_connection(*sam_address)
writer.write(sam.hello("3.1", "3.1"))
reply = parse_reply(await reader.readline())
if reply.ok:
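A minimal usage sketch of the handshake helper above (hypothetical: the vendored import path RNS.vendor.i2plib.aiosam and a reachable I2P SAM bridge on the default address are assumptions, not confirmed by this diff):
import asyncio
from RNS.vendor.i2plib import aiosam

async def sam_hello():
    # Open a control socket to the SAM bridge and perform the HELLO handshake
    reader, writer = await aiosam.get_sam_socket()
    writer.close()

asyncio.run(sam_hello())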


@@ -85,17 +85,35 @@ class ClientTunnel(I2PTunnel):
"""A coroutine used to run the tunnel"""
await self._pre_run()
self.status = { "setup_ran": False, "setup_failed": False, "exception": None, "connect_tasks": [] }
async def handle_client(client_reader, client_writer):
"""Handle local client connection"""
remote_reader, remote_writer = await aiosam.stream_connect(
self.session_name, self.remote_destination,
sam_address=self.sam_address, loop=self.loop)
asyncio.ensure_future(proxy_data(remote_reader, client_writer),
loop=self.loop)
asyncio.ensure_future(proxy_data(client_reader, remote_writer),
loop=self.loop)
try:
sc_task = aiosam.stream_connect(
self.session_name, self.remote_destination,
sam_address=self.sam_address, loop=self.loop)
self.status["connect_tasks"].append(sc_task)
remote_reader, remote_writer = await sc_task
asyncio.ensure_future(proxy_data(remote_reader, client_writer),
loop=self.loop)
asyncio.ensure_future(proxy_data(client_reader, remote_writer),
loop=self.loop)
self.server = await asyncio.start_server(handle_client, *self.local_address, loop=self.loop)
except Exception as e:
self.status["setup_ran"] = True
self.status["setup_failed"] = True
self.status["exception"] = e
try:
self.server = await asyncio.start_server(handle_client, *self.local_address)
self.status["setup_ran"] = True
except Exception as e:
self.status["setup_ran"] = True
self.status["setup_failed"] = True
self.status["exception"] = e
def stop(self):
super().stop()
@@ -117,26 +135,39 @@ class ServerTunnel(I2PTunnel):
"""A coroutine used to run the tunnel"""
await self._pre_run()
self.status = { "setup_ran": False, "setup_failed": False, "exception": None, "connect_tasks": [] }
async def handle_client(incoming, client_reader, client_writer):
# data and dest may come in one chunk
dest, data = incoming.split(b"\n", 1)
remote_destination = sam.Destination(dest.decode())
logger.debug("{} client connected: {}.b32.i2p".format(
self.session_name, remote_destination.base32))
try:
# data and dest may come in one chunk
dest, data = incoming.split(b"\n", 1)
remote_destination = sam.Destination(dest.decode())
logger.debug("{} client connected: {}.b32.i2p".format(
self.session_name, remote_destination.base32))
except Exception as e:
self.status["exception"] = e
self.status["setup_failed"] = True
data = None
try:
remote_reader, remote_writer = await asyncio.wait_for(
sc_task = asyncio.wait_for(
asyncio.open_connection(
host=self.local_address[0],
port=self.local_address[1], loop=self.loop),
timeout=5, loop=self.loop)
port=self.local_address[1]),
timeout=5)
self.status["connect_tasks"].append(sc_task)
remote_reader, remote_writer = await sc_task
if data: remote_writer.write(data)
asyncio.ensure_future(proxy_data(remote_reader, client_writer),
loop=self.loop)
asyncio.ensure_future(proxy_data(client_reader, remote_writer),
loop=self.loop)
except ConnectionRefusedError:
client_writer.close()
self.status["exception"] = e
self.status["setup_failed"] = True
async def server_loop():
try:
@@ -151,6 +182,7 @@ class ServerTunnel(I2PTunnel):
pass
self.server_loop = asyncio.ensure_future(server_loop(), loop=self.loop)
self.status["setup_ran"] = True
def stop(self):
super().stop()

RNS/vendor/ifaddr/__init__.py vendored Normal file

@@ -0,0 +1,33 @@
# Copyright (c) 2014 Stefan C. Mueller
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import os
from RNS.vendor.ifaddr._shared import Adapter, IP
if os.name == "nt":
from RNS.vendor.ifaddr._win32 import get_adapters
elif os.name == "posix":
from RNS.vendor.ifaddr._posix import get_adapters
else:
raise RuntimeError("Unsupported Operating System: %s" % os.name)
__all__ = ['Adapter', 'IP', 'get_adapters']
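A short usage sketch of the vendored module (a sketch only; the Adapter and IP attributes used here are defined in _shared.py further below):
import RNS.vendor.ifaddr as ifaddr

for adapter in ifaddr.get_adapters(include_unconfigured=True):
    # adapter.ips holds IP objects; .ip is a string for IPv4, a (ip, flowinfo, scope_id) tuple for IPv6
    for ip in adapter.ips:
        print(adapter.nice_name, ip.ip, "/", ip.network_prefix)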

RNS/vendor/ifaddr/_posix.py vendored Normal file

@@ -0,0 +1,93 @@
# Copyright (c) 2014 Stefan C. Mueller
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import os
import ctypes.util
import ipaddress
import collections
import socket
from typing import Iterable, Optional
import RNS.vendor.ifaddr._shared as shared
class ifaddrs(ctypes.Structure):
pass
ifaddrs._fields_ = [
('ifa_next', ctypes.POINTER(ifaddrs)),
('ifa_name', ctypes.c_char_p),
('ifa_flags', ctypes.c_uint),
('ifa_addr', ctypes.POINTER(shared.sockaddr)),
('ifa_netmask', ctypes.POINTER(shared.sockaddr)),
]
libc = ctypes.CDLL(ctypes.util.find_library("socket" if os.uname()[0] == "SunOS" else "c"), use_errno=True) # type: ignore
def get_adapters(include_unconfigured: bool = False) -> Iterable[shared.Adapter]:
addr0 = addr = ctypes.POINTER(ifaddrs)()
retval = libc.getifaddrs(ctypes.byref(addr))
if retval != 0:
eno = ctypes.get_errno()
raise OSError(eno, os.strerror(eno))
ips = collections.OrderedDict()
def add_ip(adapter_name: str, ip: Optional[shared.IP]) -> None:
if adapter_name not in ips:
index = None # type: Optional[int]
try:
# Mypy errors on this when the Windows CI runs:
# error: Module has no attribute "if_nametoindex"
index = socket.if_nametoindex(adapter_name) # type: ignore
except (OSError, AttributeError):
pass
ips[adapter_name] = shared.Adapter(adapter_name, adapter_name, [], index=index)
if ip is not None:
ips[adapter_name].ips.append(ip)
while addr:
name = addr[0].ifa_name.decode(encoding='UTF-8')
ip_addr = shared.sockaddr_to_ip(addr[0].ifa_addr)
if ip_addr:
if addr[0].ifa_netmask and not addr[0].ifa_netmask[0].sa_familiy:
addr[0].ifa_netmask[0].sa_familiy = addr[0].ifa_addr[0].sa_familiy
netmask = shared.sockaddr_to_ip(addr[0].ifa_netmask)
if isinstance(netmask, tuple):
netmaskStr = str(netmask[0])
prefixlen = shared.ipv6_prefixlength(ipaddress.IPv6Address(netmaskStr))
else:
assert netmask is not None, f'sockaddr_to_ip({addr[0].ifa_netmask}) returned None'
netmaskStr = str('0.0.0.0/' + netmask)
prefixlen = ipaddress.IPv4Network(netmaskStr).prefixlen
ip = shared.IP(ip_addr, prefixlen, name)
add_ip(name, ip)
else:
if include_unconfigured:
add_ip(name, None)
addr = addr[0].ifa_next
libc.freeifaddrs(addr0)
return ips.values()

RNS/vendor/ifaddr/_shared.py vendored Normal file

@@ -0,0 +1,198 @@
# Copyright (c) 2014 Stefan C. Mueller
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import ctypes
import socket
import ipaddress
import platform
from typing import List, Optional, Tuple, Union
class Adapter(object):
"""
Represents a network interface device controller (NIC), such as a
network card. An adapter can have multiple IPs.
On Linux aliasing (multiple IPs per physical NIC) is implemented
by creating 'virtual' adapters, each represented by an instance
of this class. Each of those 'virtual' adapters can have both
an IPv4 and an IPv6 IP address.
"""
def __init__(self, name: str, nice_name: str, ips: List['IP'], index: Optional[int] = None) -> None:
#: Unique name that identifies the adapter in the system.
#: On Linux this is of the form of `eth0` or `eth0:1`, on
#: Windows it is a UUID in string representation, such as
#: `{846EE342-7039-11DE-9D20-806E6F6E6963}`.
self.name = name
#: Human readable name of the adapter. On Linux this
#: is currently the same as :attr:`name`. On Windows
#: this is the name of the device.
self.nice_name = nice_name
#: List of :class:`ifaddr.IP` instances in the order they were
#: reported by the system.
self.ips = ips
#: Adapter index as used by some API (e.g. IPv6 multicast group join).
self.index = index
def __repr__(self) -> str:
return "Adapter(name={name}, nice_name={nice_name}, ips={ips}, index={index})".format(
name=repr(self.name), nice_name=repr(self.nice_name), ips=repr(self.ips), index=repr(self.index)
)
# Type of an IPv4 address (a string in "xxx.xxx.xxx.xxx" format)
_IPv4Address = str
# Type of an IPv6 address (a three-tuple `(ip, flowinfo, scope_id)`)
_IPv6Address = Tuple[str, int, int]
class IP(object):
"""
Represents an IP address of an adapter.
"""
def __init__(self, ip: Union[_IPv4Address, _IPv6Address], network_prefix: int, nice_name: str) -> None:
#: IP address. For IPv4 addresses this is a string in
#: "xxx.xxx.xxx.xxx" format. For IPv6 addresses this
#: is a three-tuple `(ip, flowinfo, scope_id)`, where
#: `ip` is a string in the usual colon-separated
#: hex format.
self.ip = ip
#: Number of bits of the IP that represent the
#: network. For a `255.255.255.0` netmask, this
#: number would be `24`.
self.network_prefix = network_prefix
#: Human readable name for this IP.
#: On Linux this is currently the same as the adapter name.
#: On Windows this is the name of the network connection
#: as configured in the system control panel.
self.nice_name = nice_name
@property
def is_IPv4(self) -> bool:
"""
Returns `True` if this IP is an IPv4 address and `False`
if it is an IPv6 address.
"""
return not isinstance(self.ip, tuple)
@property
def is_IPv6(self) -> bool:
"""
Returns `True` if this IP is an IPv6 address and `False`
if it is an IPv4 address.
"""
return isinstance(self.ip, tuple)
def __repr__(self) -> str:
return "IP(ip={ip}, network_prefix={network_prefix}, nice_name={nice_name})".format(
ip=repr(self.ip), network_prefix=repr(self.network_prefix), nice_name=repr(self.nice_name)
)
if platform.system() == "Darwin" or "BSD" in platform.system():
# BSD derived systems use marginally different structures
# than either Linux or Windows.
# I still keep it in `shared` since we can use
# both structures equally.
class sockaddr(ctypes.Structure):
_fields_ = [
('sa_len', ctypes.c_uint8),
('sa_familiy', ctypes.c_uint8),
('sa_data', ctypes.c_uint8 * 14),
]
class sockaddr_in(ctypes.Structure):
_fields_ = [
('sa_len', ctypes.c_uint8),
('sa_familiy', ctypes.c_uint8),
('sin_port', ctypes.c_uint16),
('sin_addr', ctypes.c_uint8 * 4),
('sin_zero', ctypes.c_uint8 * 8),
]
class sockaddr_in6(ctypes.Structure):
_fields_ = [
('sa_len', ctypes.c_uint8),
('sa_familiy', ctypes.c_uint8),
('sin6_port', ctypes.c_uint16),
('sin6_flowinfo', ctypes.c_uint32),
('sin6_addr', ctypes.c_uint8 * 16),
('sin6_scope_id', ctypes.c_uint32),
]
else:
class sockaddr(ctypes.Structure): # type: ignore
_fields_ = [('sa_familiy', ctypes.c_uint16), ('sa_data', ctypes.c_uint8 * 14)]
class sockaddr_in(ctypes.Structure): # type: ignore
_fields_ = [
('sin_familiy', ctypes.c_uint16),
('sin_port', ctypes.c_uint16),
('sin_addr', ctypes.c_uint8 * 4),
('sin_zero', ctypes.c_uint8 * 8),
]
class sockaddr_in6(ctypes.Structure): # type: ignore
_fields_ = [
('sin6_familiy', ctypes.c_uint16),
('sin6_port', ctypes.c_uint16),
('sin6_flowinfo', ctypes.c_uint32),
('sin6_addr', ctypes.c_uint8 * 16),
('sin6_scope_id', ctypes.c_uint32),
]
def sockaddr_to_ip(sockaddr_ptr: 'ctypes.pointer[sockaddr]') -> Optional[Union[_IPv4Address, _IPv6Address]]:
if sockaddr_ptr:
if sockaddr_ptr[0].sa_familiy == socket.AF_INET:
ipv4 = ctypes.cast(sockaddr_ptr, ctypes.POINTER(sockaddr_in))
ippacked = bytes(bytearray(ipv4[0].sin_addr))
ip = str(ipaddress.ip_address(ippacked))
return ip
elif sockaddr_ptr[0].sa_familiy == socket.AF_INET6:
ipv6 = ctypes.cast(sockaddr_ptr, ctypes.POINTER(sockaddr_in6))
flowinfo = ipv6[0].sin6_flowinfo
ippacked = bytes(bytearray(ipv6[0].sin6_addr))
ip = str(ipaddress.ip_address(ippacked))
scope_id = ipv6[0].sin6_scope_id
return (ip, flowinfo, scope_id)
return None
def ipv6_prefixlength(address: ipaddress.IPv6Address) -> int:
prefix_length = 0
for i in range(address.max_prefixlen):
if int(address) >> i & 1:
prefix_length = prefix_length + 1
return prefix_length
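As a quick sanity check of the prefix-length helper above (illustrative; assumes ipv6_prefixlength is in scope):
import ipaddress
# A /64 netmask has 64 set bits, so the helper should return 64
assert ipv6_prefixlength(ipaddress.IPv6Address("ffff:ffff:ffff:ffff::")) == 64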

RNS/vendor/ifaddr/_win32.py vendored Normal file

@@ -0,0 +1,145 @@
# Copyright (c) 2014 Stefan C. Mueller
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import ctypes
from ctypes import wintypes
from typing import Iterable, List
import RNS.vendor.ifaddr._shared as shared
NO_ERROR = 0
ERROR_BUFFER_OVERFLOW = 111
MAX_ADAPTER_NAME_LENGTH = 256
MAX_ADAPTER_DESCRIPTION_LENGTH = 128
MAX_ADAPTER_ADDRESS_LENGTH = 8
AF_UNSPEC = 0
class SOCKET_ADDRESS(ctypes.Structure):
_fields_ = [('lpSockaddr', ctypes.POINTER(shared.sockaddr)), ('iSockaddrLength', wintypes.INT)]
class IP_ADAPTER_UNICAST_ADDRESS(ctypes.Structure):
pass
IP_ADAPTER_UNICAST_ADDRESS._fields_ = [
('Length', wintypes.ULONG),
('Flags', wintypes.DWORD),
('Next', ctypes.POINTER(IP_ADAPTER_UNICAST_ADDRESS)),
('Address', SOCKET_ADDRESS),
('PrefixOrigin', ctypes.c_uint),
('SuffixOrigin', ctypes.c_uint),
('DadState', ctypes.c_uint),
('ValidLifetime', wintypes.ULONG),
('PreferredLifetime', wintypes.ULONG),
('LeaseLifetime', wintypes.ULONG),
('OnLinkPrefixLength', ctypes.c_uint8),
]
class IP_ADAPTER_ADDRESSES(ctypes.Structure):
pass
IP_ADAPTER_ADDRESSES._fields_ = [
('Length', wintypes.ULONG),
('IfIndex', wintypes.DWORD),
('Next', ctypes.POINTER(IP_ADAPTER_ADDRESSES)),
('AdapterName', ctypes.c_char_p),
('FirstUnicastAddress', ctypes.POINTER(IP_ADAPTER_UNICAST_ADDRESS)),
('FirstAnycastAddress', ctypes.c_void_p),
('FirstMulticastAddress', ctypes.c_void_p),
('FirstDnsServerAddress', ctypes.c_void_p),
('DnsSuffix', ctypes.c_wchar_p),
('Description', ctypes.c_wchar_p),
('FriendlyName', ctypes.c_wchar_p),
]
iphlpapi = ctypes.windll.LoadLibrary("Iphlpapi") # type: ignore
def enumerate_interfaces_of_adapter(
nice_name: str, address: IP_ADAPTER_UNICAST_ADDRESS
) -> Iterable[shared.IP]:
# Iterate through linked list and fill list
addresses = [] # type: List[IP_ADAPTER_UNICAST_ADDRESS]
while True:
addresses.append(address)
if not address.Next:
break
address = address.Next[0]
for address in addresses:
ip = shared.sockaddr_to_ip(address.Address.lpSockaddr)
assert ip is not None, f'sockaddr_to_ip({address.Address.lpSockaddr}) returned None'
network_prefix = address.OnLinkPrefixLength
yield shared.IP(ip, network_prefix, nice_name)
def get_adapters(include_unconfigured: bool = False) -> Iterable[shared.Adapter]:
# Call GetAdaptersAddresses() with error and buffer size handling
addressbuffersize = wintypes.ULONG(15 * 1024)
retval = ERROR_BUFFER_OVERFLOW
while retval == ERROR_BUFFER_OVERFLOW:
addressbuffer = ctypes.create_string_buffer(addressbuffersize.value)
retval = iphlpapi.GetAdaptersAddresses(
wintypes.ULONG(AF_UNSPEC),
wintypes.ULONG(0),
None,
ctypes.byref(addressbuffer),
ctypes.byref(addressbuffersize),
)
if retval != NO_ERROR:
raise ctypes.WinError() # type: ignore
# Iterate through adapters fill array
address_infos = [] # type: List[IP_ADAPTER_ADDRESSES]
address_info = IP_ADAPTER_ADDRESSES.from_buffer(addressbuffer)
while True:
address_infos.append(address_info)
if not address_info.Next:
break
address_info = address_info.Next[0]
# Iterate through unicast addresses
result = [] # type: List[shared.Adapter]
for adapter_info in address_infos:
# We don't expect non-ascii characters here, so encoding shouldn't matter
name = adapter_info.AdapterName.decode()
nice_name = adapter_info.Description
index = adapter_info.IfIndex
if adapter_info.FirstUnicastAddress:
ips = enumerate_interfaces_of_adapter(
adapter_info.FriendlyName, adapter_info.FirstUnicastAddress[0]
)
ips = list(ips)
result.append(shared.Adapter(name, nice_name, ips, index=index))
elif include_unconfigured:
result.append(shared.Adapter(name, nice_name, [], index=index))
return result

RNS/vendor/ifaddr/niwrapper.py vendored Normal file

@@ -0,0 +1,57 @@
import ipaddress
import RNS.vendor.ifaddr
import socket
from typing import List
AF_INET6 = socket.AF_INET6.value
AF_INET = socket.AF_INET.value
def interfaces() -> List[str]:
adapters = RNS.vendor.ifaddr.get_adapters(include_unconfigured=True)
return [a.name for a in adapters]
def interface_names_to_indexes() -> dict:
adapters = RNS.vendor.ifaddr.get_adapters(include_unconfigured=True)
results = {}
for adapter in adapters:
results[adapter.name] = adapter.index
return results
def interface_name_to_nice_name(ifname) -> str:
try:
adapters = RNS.vendor.ifaddr.get_adapters(include_unconfigured=True)
for adapter in adapters:
if adapter.name == ifname:
if hasattr(adapter, "nice_name"):
return adapter.nice_name
except:
return None
return None
def ifaddresses(ifname) -> dict:
adapters = RNS.vendor.ifaddr.get_adapters(include_unconfigured=True)
ifa = {}
for a in adapters:
if a.name == ifname:
ipv4s = []
ipv6s = []
for ip in a.ips:
t = {}
if ip.is_IPv4:
net = ipaddress.ip_network(str(ip.ip)+"/"+str(ip.network_prefix), strict=False)
t["addr"] = ip.ip
t["prefix"] = ip.network_prefix
t["broadcast"] = str(net.broadcast_address)
ipv4s.append(t)
if ip.is_IPv6:
t["addr"] = ip.ip[0]
ipv6s.append(t)
if len(ipv4s) > 0:
ifa[AF_INET] = ipv4s
if len(ipv6s) > 0:
ifa[AF_INET6] = ipv6s
return ifa
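A brief sketch of how this wrapper can be queried (names taken from the file above; actual output is system-dependent):
from RNS.vendor.ifaddr import niwrapper

for ifname in niwrapper.interfaces():
    addresses = niwrapper.ifaddresses(ifname)
    for entry in addresses.get(niwrapper.AF_INET, []):
        print(ifname, entry["addr"], "broadcast", entry["broadcast"])
    for entry in addresses.get(niwrapper.AF_INET6, []):
        print(ifname, entry["addr"])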

RNS/vendor/ifaddr/py.typed vendored Normal file


@@ -8,6 +8,12 @@ def get_platform():
import sys
return sys.platform
def is_linux():
if get_platform() == "linux":
return True
else:
return False
def is_darwin():
if get_platform() == "darwin":
return True
@@ -42,4 +48,4 @@ def cryptography_old_api():
if cryptography.__version__ == "2.8":
return True
else:
return False
return False

Roadmap.md Normal file

@@ -0,0 +1,112 @@
# Reticulum Development Roadmap
This document outlines the currently established development roadmap for Reticulum.
1. [Currently Active Work Areas](#currently-active-work-areas)
2. [Primary Efforts](#primary-efforts)
- [Comprehensibility](#comprehensibility)
- [Universality](#universality)
- [Functionality](#functionality)
- [Usability & Utility](#usability--utility)
- [Interfaceability](#interfaceability)
3. [Auxiliary Efforts](#auxiliary-efforts)
4. [Release History](#release-history)
## Currently Active Work Areas
For each release cycle of Reticulum, improvements and additions from the five [Primary Efforts](#primary-efforts) are selected as active work areas, and can be expected to be included in the upcoming releases within that cycle. While not entirely set in stone, these selections serve as a pointer to what to expect in the near future.
- The current `0.7.x` release cycle aims at completing
- [ ] Overhauling and updating the documentation
- [ ] Distributed Destination Naming System
- [ ] Create a standalone RNS Daemon app for Android
- [ ] Network-wide path balancing
- [ ] Add automatic retries to all use cases of the `Request` API
- [ ] Performance and memory optimisations of the Python reference implementation
- [ ] Fixing bugs discovered while operating Reticulum systems and applications
## Primary Efforts
The development path for Reticulum is currently laid out in five distinct areas: *Comprehensibility*, *Universality*, *Functionality*, *Usability & Utility* and *Interfaceability*. Conceptualising the development of Reticulum into these areas serves to advance the implementation and work towards the Foundational Goals & Values of Reticulum.
### Comprehensibility
These efforts are aimed at improving the ease with which Reticulum is understood, and at lowering the barrier to entry for people who wish to start building systems on Reticulum.
- Improving [the manual](https://markqvist.github.io/Reticulum/manual/) with tutorials specifically for beginners
- Updating the documentation to reflect recent changes and improvements
- Update descriptions of protocol mechanics
- Update announce description
- Add in-depth explanation of the IFAC system
- Software
- Update Sideband screenshots
- Update Sideband description
- Update NomadNet screenshots
- Update Sideband screenshots
- Installation
- [x] Add a *Reticulum On Raspberry Pi* section
- [x] Update *Reticulum On Android* section if necessary
- [x] Update Android install documentation.
- Communications hardware section
- Add information about RNode external displays.
- [x] Packet radio modems.
- Possibly add other relevant types here as well.
- Setup *Best Practices For...* / *Installation Examples* section.
- Home or office (example)
- Vehicles (example)
- No-grid/solar/remote sites (example)
### Universality
These efforts seek to broaden the universality of the Reticulum software and hardware ecosystem by continuously diversifying platform support, and by improving the overall availability and ease of deployment of the Reticulum stack.
- OpenWRT support
- Create a standalone RNS Daemon app for Android
- A lightweight and portable C implementation for microcontrollers, µRNS
- A portable, high-performance Reticulum implementation in C/C++, see [#21](https://github.com/markqvist/Reticulum/discussions/21)
- Performance and memory optimisations of the Python implementation
- Bindings for other programming languages
### Functionality
These efforts aim to expand and improve the core functionality and reliability of Reticulum.
- Add automatic retries to all use cases of the `Request` API
- Network-wide path balancing
- Distributed Destination Naming System
- Globally routable multicast
- Destination proxying
- [Metric-based path selection and multiple paths](https://github.com/markqvist/Reticulum/discussions/86)
### Usability & Utility
These efforts seek to make Reticulum easier to use and operate, and to expand the utility of the stack on deployed systems.
- Easy way to share interface configurations, see [#19](https://github.com/markqvist/Reticulum/discussions/19)
- Transit traffic display in rnstatus
- rnsconfig utility
### Interfaceability
These efforts aim to expand the types of physical and virtual interfaces that Reticulum can natively use to transport data.
- Filesystem interface
- Plain ESP32 devices (ESP-Now, WiFi, Bluetooth, etc.)
- More LoRa transceivers
- AT-compatible modems
- Direct SDR Support
- Optical mediums
- IR Transceivers
- AWDL / OWL
- HF Modems
- GNU Radio
- CAN-bus
- Raw SPI
- Raw i²c
- MQTT
- XBee
- Tor
## Auxiliary Efforts
The Reticulum ecosystem is enriched by several other software and hardware projects, and their support and improvement, in symbiosis with the core Reticulum project, helps expand the reach and utility of Reticulum itself.
This section lists, in no particular order, various important efforts that would be beneficial to the goals of Reticulum.
- The [RNode](https://unsigned.io/rnode/) project
- [ ] Create a WebUSB-based bootstrapping utility, and integrate this directly into the [RNode Bootstrap Console](#), both on-device and on an Internet-reachable copy. This will make it much easier to create new RNodes for average users.
## Release History
Please see the [Changelog](./Changelog.md) for a complete release history and changelog of Reticulum.


@@ -30,3 +30,8 @@ help:
cp -r build/latex/reticulumnetworkstack.pdf ./Reticulum\ Manual.pdf; \
echo "PDF Manual Generated"; \
fi
@if [ $@ = "epub" ]; then \
cp -r build/epub/ReticulumNetworkStack.epub ./Reticulum\ Manual.epub; \
echo "EPUB Manual Generated"; \
fi

docs/Reticulum Manual.epub Normal file

Binary file not shown.



@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: cb185315f69a86407f1dca0f5d7d25ef
config: e6cf914f5d96347d4f64d2e8bcefb841
tags: 645f666f9bcd5a90fca523b33c5a78b7

Binary image files not shown (7 added, 2 removed).
Some files were not shown because too many files have changed in this diff.