moonraker: add initial source

Signed-off-by: Eric Callahan <arksine.code@gmail.com>
Author: Arksine 2020-07-01 21:21:35 -04:00
parent 8779bd74e2
commit d1c740b900
25 changed files with 5948 additions and 2 deletions

README.md

# Moonraker - API Web Server for Klipper
Moonraker is a Python 3 based web server that exposes APIs with which
client applications may interact with Klipper. Communication between
the Klippy host and Moonraker is done over a Unix Domain Socket.
Moonraker depends on Tornado for its server functionality. Moonraker
does not come bundled with a client, you will need to install one,
such as [Mainsail](https://github.com/meteyou/mainsail).

docs/dev_changelog.md
### Moonraker Version .08-alpha - 7/2/2020
- Moonraker has moved to its own repo.
- Python 3 support has been added.
- API Key management has moved from Klippy to Moonraker
- File Management has moved from Klippy to Moonraker. All static files are now
located in the `/server/files` root path:
- klippy.log - `/server/files/klippy.log`
- moonraker.log - `/server/files/moonraker.log`
- gcode files - `/server/files/gcodes/(.*)`
Note that the new file manager will be capable of serving and listing files
in directories aside from "gcodes".
- Added basic plugin support
- Added metadata support for SuperSlicer
- Added thumbnail extraction from SuperSlicer and PrusaSlicer gcode files
- For status requests, `virtual_sdcard.current_file` has been renamed to
`virtual_sdcard.filename`
- Clients should no longer send `M112` via gcode to execute an emergency shutdown.
They should instead use the new API which exposes this functionality.
- New APIs:
- `POST /printer/emergency_stop` - `post_printer_emergency_stop`
- `GET /server/files/metadata` - `get_metadata`
- `GET /server/files/directory`
- `POST /server/files/directory`
- `DELETE /server/files/directory`
- The following API changes have been made:
| Previous URI | New URI | Previous JSON_RPC method | New JSON_RPC method |
|--------------|---------|--------------------------| --------------------|
| GET /printer/objects | GET /printer/objects/list | get_printer_objects | get_printer_objects_list |
| GET /printer/subscriptions | GET /printer/objects/subscription | get_printer_subscriptions | get_printer_objects_subscription |
| POST /printer/subscriptions | POST /printer/objects/subscription | post_printer_subscriptions | post_printer_objects_subscription |
| GET /printer/status | GET /printer/objects/status | get_printer_status | get_printer_objects_status |
| POST /printer/gcode | POST /printer/gcode/script | post_printer_gcode | post_printer_gcode_script |
| GET /printer/klippy.log | GET /server/files/klippy.log | | |
| GET /server/moonraker.log | GET /server/files/moonraker.log | | |
| GET /printer/files | GET /server/files/list | get_printer_files | get_file_list |
| POST /printer/files/upload | POST /server/files/upload | | |
| GET /printer/files/<filename> | GET /server/files/gcodes/<filename> | | |
| DELETE /printer/files/<filename> | DELETE /server/files/<filename> | | |
| GET /printer/endstops | GET /printer/query_endstops/status | get_printer_endstops | get_printer_query_endstops_status |
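For a client migrating across this release, the renames in the table above amount to a lookup table. A minimal sketch (the mapping simply mirrors the table; the helper name is illustrative):

```python
# Sketch: translate legacy Moonraker URIs to their renamed equivalents,
# mirroring the API change table above. Unknown URIs pass through unchanged.
LEGACY_URI_MAP = {
    "/printer/objects": "/printer/objects/list",
    "/printer/subscriptions": "/printer/objects/subscription",
    "/printer/status": "/printer/objects/status",
    "/printer/gcode": "/printer/gcode/script",
    "/printer/klippy.log": "/server/files/klippy.log",
    "/server/moonraker.log": "/server/files/moonraker.log",
    "/printer/files": "/server/files/list",
    "/printer/files/upload": "/server/files/upload",
    "/printer/endstops": "/printer/query_endstops/status",
}

def migrate_uri(uri: str) -> str:
    """Return the post-.08 URI for a legacy one, or the URI unchanged."""
    return LEGACY_URI_MAP.get(uri, uri)
```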
### Moonraker Version .07-alpha - 5/7/2020
- The server process is no longer managed directly by Klippy. It has moved
into its own process dubbed Moonraker. Please see README.md for
installation instructions.
- API Changes:
- `/printer/temperature_store` is now `/server/temperature_store`, or
`get_server_temperature_store` via the websocket
- `/printer/log` is now `/printer/klippy.log`
- `/server/moonraker.log` has been added to fetch the server's log file
- Klippy Changes:
- The remote_api directory has been removed. There is now a single
remote_api.py module that handles server configuration.
- webhooks.py has been changed to handle communications with the server
- klippy.py has been changed to pass itself to webhooks
- file_manager.py has been changed to specify the correct status code
when an error is generated attempting to upload or delete a file
- The nginx configuration will need the following additional section:
```
location /server {
proxy_pass http://apiserver/server;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Scheme $scheme;
}
```
### Version .06-alpha - 5/4/2020
- Add `/machine/reboot` and `/machine/shutdown` endpoints. These may be used
to reboot or shutdown the host machine
- Fix issue where websocket was blocked on long transactions, resulting in the
connection being closed
- Log all client requests over the websocket
- Add `/printer/temperature_store` endpoint. Clients may use this to fetch
stored temperature data. By default the store for each temperature sensor
is updated every 1s, with the store holding 20 minutes of data.
### Version .05-alpha - 04/23/2020
- The `[web_server]` module has been renamed to `[remote_api]`. Please update
printer.cfg accordingly
- Static files are no longer served by the API server. As a result, there is
no `web_path` option in `[remote_api]`.
- The server process now forwards logging requests back to the Klippy
Host, thus all logging is done in klippy.log. The temporary endpoint serving
klippy_server.log has been removed.
- `/printer/info` now includes two additional keys:
- `error_detected` - Boolean value set to true if a host error has been
detected
- `message` - The current Klippy State message. If an error is detected this
message may be presented to the user. This is the same message returned
by the STATUS gcode.
- The server process is now launched immediately after the config file is read.
This allows the client limited access to Klippy in the event of a startup
error, assuming the config file was successfully parsed and the
`remote_api` configuration section is valid. Note that when the server is
initially launched not all endpoints will be available. The following
endpoints are guaranteed when the server is launched:
- `/websocket`
- `/printer/info`
- `/printer/restart`
- `/printer/firmware_restart`
- `/printer/log`
- `/printer/gcode`
- `/access/api_key`
- `/access/oneshot_token`
The following startup sequence is recommended for clients which make use of
the websocket:
- Attempt to connect to `/websocket` until successful
- Once connected, query `/printer/info` for the ready status. If not ready
check `error_detected`. If not ready and no error, continue querying on
a timer until the printer is either ready or an error is detected.
- After the printer has identified itself as ready make subscription requests,
get the current file list, etc
- If the websocket disconnects the client can assume that the server is shutdown.
It should consider the printer's state to be NOT ready and try reconnecting to
the websocket until successful.
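The polling portion of the sequence above can be sketched client-side. This is an illustration only; `query_printer_info` is a placeholder for whatever transport the client uses to issue `/printer/info` requests:

```python
import time

def wait_for_ready(query_printer_info, poll_interval=1.0, max_attempts=30):
    """Poll /printer/info until the printer is ready or an error appears.

    `query_printer_info` is a placeholder callable returning the decoded
    response object, e.g. {"is_ready": bool, "error_detected": bool, ...}.
    """
    for _ in range(max_attempts):
        info = query_printer_info()
        if info.get("is_ready"):
            return "ready"           # safe to subscribe, fetch files, etc.
        if info.get("error_detected"):
            return "error"           # surface info["message"] to the user
        time.sleep(poll_interval)    # not ready, no error: keep polling
    return "timeout"
```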
### Version .04-alpha - 04/20/2020
- Add `/printer/gcode/help` endpoint to gcode.py
- Allow the clients to fetch .json files in the root web directory
- Add support for detailed print tracking to virtual_sdcard.py. This
includes filament usage and print time tracking
- Add new file_manager.py module for advanced gcode file management. Gcode
files may exist in subdirectories. This module also supports extracting
metadata from gcode files.
- Clean up API registration. All endpoints are now registered by Klippy
host modules outside of static files and `/api/version`, which is used for
compatibility with Octoprint's legacy file upload API.
- The server now runs in its own process. Communication between the Host and
the server is done over a duplex pipe. Currently this results in a second
log file being generated specifically for the server at
`/tmp/klippy_server.log`. This is likely a temporary solution, and as such
a temporary endpoint has been added at `/printer/klippy_server.log`. Users
can use the browser to download the log by navigating to
`http://<host>/printer/klippy_server.log`.
### Version .03-alpha - 03/09/2020
- Require that the configured port be above 1024.
- Fix hard crash if the webserver fails to start.
- Fix file uploads with names containing whitespace
- Serve static files based on their relative directory, ie a request
for "/js/main.js" will now look for the files in "<web_path>/js/main.js".
- Fix bug in CORS where DELETE requests raised an exception
- Disable the server when running Klippy in batch mode
- The `/printer/cancel`, `/printer/pause` and `/printer/resume` endpoints
are now registered by the pause_resume module. This results in the following
changes:
- The `cancel_gcode`, `pause_gcode`, and `resume_gcode` options have
been removed from the [web_server] section.
- The `/printer/pause` and `/printer/resume` endpoints will run the "PAUSE"
and "RESUME" gcodes respectively. These gcodes can be overridden by a
gcode_macro to run custom PAUSE and RESUME commands. For example:
```
[gcode_macro PAUSE]
rename_existing: BASE_PAUSE
gcode:
    {% if not printer.pause_resume.is_paused %}
    M600
    {% endif %}

[gcode_macro M600]
default_parameter_X: 50
default_parameter_Y: 0
default_parameter_Z: 10
gcode:
    SET_IDLE_TIMEOUT TIMEOUT=18000
    {% if not printer.pause_resume.is_paused %}
    BASE_PAUSE
    {% endif %}
    G1 E-.8 F2700
    G91
    G1 Z{Z}
    G90
    G1 X{X} Y{Y} F3000
```
If you are calling "PAUSE" in any other macro or config section, please
remember that it will execute the macro. If that is not your intention,
change "PAUSE" in those sections to the renamed version; in the example
above it is BASE_PAUSE.
- The cancel endpoint runs a "CANCEL_PRINT" gcode. Users will need to
define their own gcode macro for this
- Remove "notify_paused_state_changed" and "notify_printer_state_changed"
events. The data from these events can be fetched via status
subscriptions.
- "idle_timeout" and "pause_resume" now default to tier 1 status updates,
which sets their default refresh time is 250ms.
- Some additional status attributes have been added to virtual_sdcard.py. At
the moment they are experimental and subject to change:
- 'is_active' - returns true when the virtual_sdcard is processing. Note
that this will return false when the printer is paused
- 'current_file' - The name of the currently loaded file. If no file is
loaded returns an empty string.
- 'print_duration' - The approximate duration (in seconds) of the current
print. This value does not include time spent paused. Returns 0 when
no file is loaded.
- 'total_duration' - The total duration of the current print, including time
spent paused. This can be useful for approximating the local time the
print started. Returns 0 when no file is loaded.
- 'filament_used' - The approximate amount of filament used. This does not
include changes to flow rate. Returns 0 when no file is loaded.
- 'file_position' - The current position (in bytes) of the loaded file.
Returns 0 when no file is loaded.
- 'progress' - This attribute already exists, however it has been changed
to retain its value while the print is paused. Previously it would reset
to 0 when paused. Returns 0 when no file is loaded.
### Version .02-alpha - 02/27/2020
- Migrated Framework and Server from Bottle/Eventlet to Tornado. This
resolves an issue where the server hangs for a period of time if the
network connection abruptly drops.
- A `webhooks` host module has been created. Other modules can use
webhooks to register endpoints, even if the web_server is not
configured.
- Two modules have been renamed, subscription_handler.py is now
status_handler.py and ws_handler.py is now ws_manager.py. These names
more accurately reflect their current functionality.
- Tornado Websockets support string encoded frames. Thus it is no longer
necessary for clients to use a FileReader object to convert incoming
websocket data from a Blob into a String.
- The endpoint for querying endstops has changed from `GET
/printer/extras/endstops` to `GET /printer/endstops`
- Several API changes have been made to accommodate the addition of webhooks:
- `GET /printer/klippy_info` is now `GET /printer/info`. This endpoint no
longer returns host information, as that can be retrieved directly via the
`location` object in javascript. Instead it returns CPU information.
- `GET /printer/objects` no longer accommodates multiple request
types by modifying the "Accept" headers. Each request type has been broken
out into its own endpoint:
- `GET /printer/objects` returns all available printer objects that may
be queried
- `GET /printer/status?gcode=gcode_position,speed&toolhead` returns the
status of the printer objects and attributes
- `GET /printer/subscriptions` returns all printer objects that are currently
being subscribed to along with their poll times
- `POST /printer/subscriptions?gcode&toolhead` requests that the printer
add the specified objects and attributes to the list of subscribed objects
- Requests that query the Klippy host with additional parameters can no
longer use variable paths. For example, `POST /printer/gcode/<gcode>` is no
longer valid. Parameters must be added to the query string. This currently
affects two endpoints:
- `POST /printer/gcode/<gcode>` is now `POST /printer/gcode?script=<gcode>`
- `POST printer/print/start/<filename>` is now
`POST /printer/print/start?filename=<filename>`
- The websocket API also required changes to accommodate dynamically registered
endpoints. Each method name is now generated from its comparable HTTP
request. The new method names are listed below:
| new method | old method |
|------------|------------|
| get_printer_files | get_file_list |
| get_printer_info | get_klippy_info |
| get_printer_objects | get_object_info |
| get_printer_subscriptions | get_subscribed |
| get_printer_status | get_status |
| post_printer_subscriptions | add_subscription |
| post_printer_gcode | run_gcode |
| post_printer_print_start | start_print |
| post_printer_print_pause | pause_print |
| post_printer_print_resume | resume_print |
| post_printer_print_cancel | cancel_print |
| post_printer_restart | restart |
| post_printer_firmware_restart | firmware_restart |
| get_printer_endstops | get_endstops |
- As with the http API, a change was necessary to the way arguments are sent
along with the request. Websocket requests should now send "keyword
arguments" rather than "variable arguments". The test client has been
updated to reflect these changes, see main.js and json-rpc.js, specifically
the new method `call_method_with_kwargs`. For status requests this simply
means that it is no longer necessary to wrap the Object in an Array. The
gcode and start print requests now look for named parameters, ie:
- gcode requests - `{jsonrpc: "2.0", method: "post_printer_gcode",
params: {script: "M117 FooBar"}, id: <request id>}`
- start print - `{jsonrpc: "2.0", method: "post_printer_print_start",
params: {filename: "my_file.gcode"}, id:<request id>}`
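The kwargs-style requests above can be produced with a small helper, analogous to the test client's `call_method_with_kwargs`. A sketch under the assumption that request ids are a simple incrementing counter (the counter is illustrative, not Moonraker's scheme):

```python
import itertools
import json

# Illustrative id source; any unique-per-request value works for JSON-RPC.
_next_id = itertools.count(1)

def build_rpc_request(method: str, **kwargs) -> str:
    """Build a JSON-RPC 2.0 request using keyword params, per the API above."""
    request = {"jsonrpc": "2.0", "method": method, "id": next(_next_id)}
    if kwargs:
        request["params"] = kwargs  # keyword args, not a positional array
    return json.dumps(request)
```

For example, `build_rpc_request("post_printer_gcode", script="M117 FooBar")` yields the gcode request shown above.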
### Version .01-alpha - 02/14/2020
- The api.py module has been refactored to contain the bottle application and
all routes within a class. Bottle is now imported and patched dynamically
within this class's constructor. This resolves an issue where the "request"
context was lost when the Klippy host restarts.
- Change the Websocket API to use the JSON-RPC 2.0 protocol. See the test
client (main.js and json-rpc.js) for an example client side implementation.
- Remove file transfer support from the websocket. Use the HTTP for all file
transfer requests.
- Add support for Klippy Host modules to register their own urls.
Query_endstops.py has been updated with an example. As a result of this
change, the endpoint for endstop query has been changed to
`/printer/extras/endstops`.
- Add support for "paused", "resumed", and "cleared" pause events.
- Add routes for downloading klippy.log, restart, and firmware_restart.
- Remove support for trailing slashes in HTTP API routes.
- Support "start print after upload" requests
- Add support for user configured request timeouts
- The test client has been updated to work with the new changes

docs/installation.md
## Installation
This document provides a guide on how to install Moonraker on a Raspberry
Pi running Raspbian/Raspberry Pi OS. Other SBCs and/or Linux distributions
may work, however they may need a custom install script.
Klipper should be installed prior to installing Moonraker. Please see
[Klipper's Documentation](https://github.com/KevinOConnor/klipper/blob/master/docs/Installation.md)
for instructions on how to do this.
Moonraker is still in alpha development, and thus some of its dependencies
in Klipper have yet to be merged. Until this has been done it will be
necessary to add a remote and work off a developmental branch of Klipper
to correctly run Moonraker.
```
cd ~/klipper
git remote add arksine https://github.com/Arksine/klipper.git
```
Now fetch and checkout:
```
git fetch arksine
git checkout arksine/dev-moonraker-testing
```
Note that you are now in a detached head state and you cannot pull. Any
time you want to update to the latest version of this branch you must
repeat the two commands above.
For reference, if you want to switch back to the clone of the official repo:
```
git checkout master
```
Note that the above command is NOT part of the Moonraker install procedure.
You can now install the Moonraker application:
```
cd ~
git clone https://github.com/Arksine/moonraker.git
```
If you have an older version of moonraker installed, it must be removed:
```
cd ~/moonraker/scripts
./uninstall_moonraker.sh
```
Finally, run moonraker's install script:
```
cd ~/moonraker/scripts
./install_moonraker.sh
```
When the script completes it should start both Moonraker and Klipper. In
`klippy.log` you should find the following entry:\
`Moonraker: server connection detected`
Currently Moonraker is responsible for creating the Unix Domain Socket,
so it must be started first for Klippy to connect. In any instance
where Klipper was started first, simply restart the Klipper service.
```
sudo service klipper restart
```
After the connection is established Klippy will register API endpoints and
send configuration to the server. Once the initial configuration is sent
to Moonraker its configuration will be retained when Klippy disconnects
(either through a restart or by stopping the service), and updated when
Klippy reconnects.
# Configuration
The host, port, log file location, socket file location and api key file
are all specified via command arguments:
```
usage: moonraker.py [-h] [-a <address>] [-p <port>] [-s <socketfile>]
[-l <logfile>] [-k <apikeyfile>]
Moonraker - Klipper API Server
optional arguments:
-h, --help show this help message and exit
-a <address>, --address <address>
host name or ip to bind to the Web Server
-p <port>, --port <port>
port the Web Server will listen on
-s <socketfile>, --socketfile <socketfile>
file name and location for the Unix Domain Socket
-l <logfile>, --logfile <logfile>
log file name and location
-k <apikeyfile>, --apikey <apikeyfile>
API Key file location
```
The default configuration is:
- address = 0.0.0.0 (Bind to all interfaces)
- port = 7125
- socketfile = /tmp/moonraker
- logfile = /tmp/moonraker.log
- apikeyfile = ~/.moonraker_api_key
It is recommended to use the defaults, however one may change these
arguments by editing `/etc/default/moonraker`.
All other configuration is sent to the server via Klippy, thus it is done in
printer.cfg. A basic configuration that authorizes clients on a range from
192.168.1.1 - 192.168.1.254 is as follows:
```
[moonraker]
trusted_clients:
192.168.1.0/24
```
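The `trusted_clients` check is essentially an IP-network membership test. A sketch of the idea using Python's `ipaddress` module (an illustration of the concept, not Moonraker's actual implementation):

```python
import ipaddress

# Networks parsed from a hypothetical trusted_clients option as above.
TRUSTED_NETWORKS = [ipaddress.ip_network("192.168.1.0/24")]

def is_trusted(client_ip: str) -> bool:
    """Return True if the client address falls inside a trusted range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in TRUSTED_NETWORKS)
```

Note that `ipaddress.ip_network` raises a `ValueError` for a range like `192.168.1.5/24` with a non-zero host part, which matches the configuration-error behavior described below.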
Below is a detailed explanation of all options currently available:
```
#[moonraker]
#require_auth: True
# Enables Authorization. When set to true, only trusted clients and
# requests with an API key are accepted.
#enable_cors: False
# Enables CORS support. If serving static files from a different http
# server then CORS will need to be enabled.
#trusted_clients:
# A list of new line separated ip addresses, or ip ranges, that are trusted.
# Trusted clients are given full access to the API. Note that ranges must
# be expressed in 24-bit CIDR notation, where the last segment is zero:
# 192.168.1.0/24
#   The above example will allow 192.168.1.1 - 192.168.1.254. Note that
#   attempting to use a non-zero value for the last IP segment or a different
#   bit value will result in a configuration error.
#request_timeout: 5.
# The amount of time (in seconds) a client request has to process before the
# server returns an error. This timeout does NOT apply to gcode requests.
# Default is 5 seconds.
#long_running_gcodes:
# BED_MESH_CALIBRATE, 120.
# M104, 200.
# A list of gcodes that will be assigned their own timeout. The list should
# be in the format presented above, where the first item is the gcode name
# and the second item is the timeout (in seconds). Each pair should be
# separated by a newline. The default is an empty list where no gcodes have
# a unique timeout.
#long_running_requests:
# gcode/script, 60.
# pause_resume/pause, 60.
# pause_resume/resume, 60.
# pause_resume/cancel, 60.
#   A list of requests that will be assigned their own timeout. The list
#   should be formatted in the same manner as long_running_gcodes. The
#   default matches the example shown above.
#status_tier_1:
# toolhead
# gcode
#status_tier_2:
# fan
#status_tier_3:
# extruder
# virtual_sdcard
# Subscription Configuration. By default items in tier 1 are polled every
# 250 ms, tier 2 every 500 ms, tier 3 every 1s, tier 4 every 2s, tier
# 5 every 4s, tier 6 every 8s.
#tick_time: .25
# This is the base interval used for status tier 1. All other status tiers
# are calculated using the value defined by tick_time (See below for more
# information). Default is 250ms.
```
The "status tiers" are used to determine how fast each klippy object is allowed
to be polled. Each tier is calculated using the `tick_time` option. There are
6 tiers, `tier_1 = tick_time` (.25s), `tier_2 = tick_time*2` (.5s),
`tier_3 = tick_time*4` (1s), `tier_4 = tick_time*8` (2s),
`tier_5 = tick_time*16` (4s), and `tier_6 = tick_time*16` (8s). This method
was chosen to provide some flexibility for slower hosts while making it easy to
batch subscription updates together.
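The doubling scheme above reduces to one line of arithmetic; a quick sketch:

```python
def tier_intervals(tick_time: float = 0.25, tiers: int = 6):
    """Each status tier polls at double the previous tier's interval.

    tier_1 = tick_time, tier_2 = tick_time*2, ..., tier_6 = tick_time*32.
    """
    return [tick_time * (2 ** n) for n in range(tiers)]
```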
## Plugin Configuration
The core plugins are configured via the primary configuration above. Optional
plugins each need their own configuration. Currently the only optional plugin
available is the `paneldue` plugin, which can be configured as follows:
```
[moonraker_plugin paneldue]
serial: /dev/ttyAMA0
baud: 57600
machine_name: Voron 2
macros:
LOAD_FILAMENT
UNLOAD_FILAMENT
PREHEAT_CHAMBER
TURN_OFF_MOTORS
TURN_OFF_HEATERS
PANELDUE_BEEP FREQUENCY=500 DURATION=1
```
Most options above are self explanatory. The "macros" option can be used
to specify commands (either built in or gcode_macros) that will show up
in the PanelDue's "macro" menu.
Note that buzzing the piezo requires the following gcode_macro:
```
[gcode_macro PANELDUE_BEEP]
# Beep frequency
default_parameter_FREQUENCY: 300
# Beep duration in seconds
default_parameter_DURATION: 1.
gcode:
{ printer.moonraker.action_call_remote_method(
"paneldue_beep", frequency=FREQUENCY|int,
duration=DURATION|float) }
```

docs/plugins.md
## Plugins
Documentation Forthcoming

docs/web_api.md
# API
Most API methods are supported over both the Websocket and HTTP transports.
File Transfer and "/access" requests are only available over HTTP. The
Websocket is required to receive printer generated events such as gcode
responses. For information on how to set up the Websocket, please see the
Appendix at the end of this document.
Note that all HTTP responses are returned as a json encoded object in the form
of:
`{result: <response data>}`
Where the result is the return value generated from the request.
Websocket requests are returned in JSON-RPC format:
`{jsonrpc: "2.0", "result": <response data>, id: <request id>}`
HTTP requests will receive a 500 status code on error, accompanied by
the specific error message.
Websocket requests that result in an error will receive a properly formatted
JSON-RPC response:
`{jsonrpc: "2.0", "error": {code: <code>, message: <msg>}, id: <request_id>}`
Note that under some circumstances it may not be possible for the server to
return a request ID, such as an improperly formatted json request.
The `test/client` folder includes a basic test interface with example usage for
most of the requests below. It also includes a basic JSON-RPC implementation
that uses promises to return responses and errors (see json-rpc.js).
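The response and error shapes described above can be split apart with a small parser. This Python sketch mirrors what json-rpc.js does before resolving a promise by `id` (the function name is illustrative):

```python
import json

def parse_rpc_response(raw: str):
    """Return (id, result, error) from a JSON-RPC 2.0 response frame.

    The id may be None when the server could not parse the request,
    as noted above for improperly formatted json requests.
    """
    msg = json.loads(raw)
    return msg.get("id"), msg.get("result"), msg.get("error")
```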
## Printer Administration
### Get Klippy host information:
- HTTP command:\
`GET /printer/info`
- Websocket command:\
`{jsonrpc: "2.0", method: "get_printer_info", id: <request id>}`
- Returns:\
An object containing the build version, cpu info, and whether the Klippy
process is ready for operation. The latter is useful when a client connects
after the klippy state event has been broadcast.
`{version: "<version>", cpu: "<cpu_info>", is_ready: <boolean>,
hostname: "<hostname>", error_detected: <boolean>,
message: "<current state message>"}`
### Emergency Stop
- HTTP command:\
`POST /printer/emergency_stop`
- Websocket command:\
`{jsonrpc: "2.0", method: "post_printer_emergency_stop", id: <request id>}`
- Returns:\
`ok`
### Restart the host
- HTTP command:\
`POST /printer/restart`
- Websocket command:\
`{jsonrpc: "2.0", method: "post_printer_restart", id: <request id>}`
- Returns:\
`ok`
### Restart the firmware (restarts the host and all connected MCUs)
- HTTP command:\
`POST /printer/firmware_restart`
- Websocket command:\
`{jsonrpc: "2.0", method: "post_printer_firmware_restart", id: <request id>}`
- Returns:\
`ok`
## Printer Status
### Request available printer objects and their attributes:
- HTTP command:\
`GET /printer/objects/list`
- Websocket command:\
`{jsonrpc: "2.0", method: "get_printer_objects_list", id: <request id>}`
- Returns:\
An object containing key, value pairs, where the key is the name of the
Klippy module available for status query, and the value is an array of
strings containing that module's available attributes.
```json
{ gcode: ["busy", "gcode_position", ...],
toolhead: ["position", "status"...], ...}
```
### Request currently subscribed objects:
- HTTP command:\
`GET /printer/objects/subscription`
- Websocket command:\
`{jsonrpc: "2.0", method: "get_printer_objects_subscription", id: <request id>}`
- Returns:\
An object similar to that above, however the format of the `result`
value is changed to include poll times:
```json
{ objects: {
gcode: ["busy", "gcode_position", ...],
toolhead: ["position", "status"...],
...},
poll_times: {
gcode: .25,
toolhead: .25,
...}
}
```
### Request a status update for an object, or group of objects:
- HTTP command:\
`GET /printer/objects/status?gcode`
The above will fetch a status update for all gcode attributes. The query
string can contain multiple items, and specify individual attributes:
`?gcode=gcode_position,busy&toolhead&extruder=target`
- Websocket command:\
`{jsonrpc: "2.0", method: "get_printer_objects_status", params:
{gcode: [], toolhead: ["position", "status"]}, id: <request id>}`
Note that an empty array will fetch all available attributes for its key.
- Returns:\
An object where the top level keys are the requested Klippy objects, as shown
below:
```json
{ gcode: {
busy: true,
gcode_position: [0, 0, 0 ,0],
...},
toolhead: {
position: [0, 0, 0, 0],
status: "Ready",
...},
...}
```
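The HTTP query string for a status request can be generated from the same mapping the websocket variant uses as `params`. A sketch (an empty attribute list requests all attributes, matching the note above; the helper name is illustrative):

```python
def build_status_query(objects: dict) -> str:
    """Build a /printer/objects/status query string from an object map.

    {"gcode": ["gcode_position", "busy"], "toolhead": []} ->
    "?gcode=gcode_position,busy&toolhead"
    """
    parts = []
    for name, attrs in objects.items():
        # A bare key (no "=") asks for every attribute of that object.
        parts.append(f"{name}={','.join(attrs)}" if attrs else name)
    return "?" + "&".join(parts)
```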
### Subscribe to a status request or a batch of status requests:
- HTTP command:\
`POST /printer/objects/subscription?gcode=gcode_position,busy&extruder=target`
- Websocket command:\
`{jsonrpc: "2.0", method: "post_printer_objects_subscription", params:
{gcode: [], toolhead: ["position", "status"]}, id: <request id>}`
- Returns:\
An acknowledgement that the request has been received:
`ok`
The actual status updates will be sent asynchronously over the websocket.
### Query Endstops
- HTTP command:\
`GET /printer/query_endstops/status`
- Websocket command:\
`{jsonrpc: "2.0", method: "get_printer_query_endstops_status", id: <request id>}`
- Returns:\
An object containing the current endstop state, with each attribute in the
format of `endstop:<state>`, where "state" can be "open" or "TRIGGERED", for
example:
```json
{x: "TRIGGERED",
y: "open",
z: "open"}
```
### Fetch stored temperature data
- HTTP command:\
`GET /server/temperature_store`
- Websocket command:\
`{jsonrpc: "2.0", method: "get_temperature_store", id: <request id>}`
- Returns:\
An object where the keys are the available temperature sensor names, with
each value being an array of stored temperatures. The array is updated every
1 second by default, containing a total of 1200 values (20 minutes). The
array is organized from oldest temperature to most recent (left to right).
Note that when the host starts each array is initialized to 0s.
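The store described above behaves like a fixed-size ring buffer: one sample per second, 1200 values covering 20 minutes, initialized to zeros. A sketch of that behavior using `collections.deque` (illustrative, not Moonraker's internal data structure):

```python
from collections import deque

STORE_SIZE = 20 * 60  # 20 minutes of 1 s samples = 1200 values

def new_temperature_store():
    """Initialize a sensor's store to zeros, oldest sample on the left."""
    return deque([0.0] * STORE_SIZE, maxlen=STORE_SIZE)

def record_sample(store, temp):
    """Append the newest reading; the oldest value falls off the left."""
    store.append(temp)
```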
## Gcode Controls
### Run a gcode:
- HTTP command:\
`POST /printer/gcode/script?script=<gc>`
For example,\
`POST /printer/gcode/script?script=RESPOND MSG=Hello`\
Will echo "Hello" to the terminal.
- Websocket command:\
`{jsonrpc: "2.0", method: "post_printer_gcode_script",
params: {script: <gc>}, id: <request id>}`
- Returns:\
An acknowledgement that the gcode has completed execution:
`ok`
### Get GCode Help
- HTTP command:\
`GET /printer/gcode/help`
- Websocket command:\
`{jsonrpc: "2.0", method: "get_printer_gcode_help",
params: {script: <gc>}, id: <request id>}`
- Returns:\
An object where the keys are gcode handlers and values are the associated
help strings. Note that help strings are not available for basic gcode
handlers such as G1, G28, etc.
## Print Management
### Print a file
- HTTP command:\
`POST /printer/print/start?filename=<file name>`
- Websocket command:\
`{jsonrpc: "2.0", method: "post_printer_print_start",
params: {filename: <file name>, id:<request id>}`
- Returns:\
`ok` on success
### Pause a print
- HTTP command:\
`POST /printer/print/pause`
- Websocket command:\
`{jsonrpc: "2.0", method: "post_printer_print_pause", id: <request id>}`
- Returns:\
`ok`
### Resume a print
- HTTP command:\
`POST /printer/print/resume`
- Websocket command:\
`{jsonrpc: "2.0", method: "post_printer_print_resume", id: <request id>}`
- Returns:\
`ok`
### Cancel a print
- HTTP command:\
`POST /printer/print/cancel`
- Websocket command:\
`{jsonrpc: "2.0", method: "post_printer_print_cancel", id: <request id>}`
- Returns:\
`ok`
## Machine Commands
### Shutdown the Operating System
- HTTP command:\
`POST /machine/shutdown`
- Websocket command:\
`{jsonrpc: "2.0", method: "post_machine_shutdown", id: <request id>}`
- Returns:\
No return value as the server will shut down upon execution
### Reboot the Operating System
- HTTP command:\
`POST /machine/reboot`
- Websocket command:\
`{jsonrpc: "2.0", method: "post_machine_reboot", id: <request id>}`
- Returns:\
No return value as the host will reboot upon execution
## File Operations
While all file transfer operations are available via the HTTP API, only
"get_file_list" and "get_metadata" are available over the websocket. Aside from
the log files, currently the only root available is "gcodes" (at
`http://host/server/files/gcodes/*`), however support for other "root"
directories may be added in the future. File upload, file delete, and
directory manipulation (mkdir and rmdir) will only be available on the "gcodes"
root.
### List Available Files
Walks through a directory and fetches all files. All file names include a
path relative to the specified "root".
- HTTP command:\
`GET /server/files/list?root=gcodes`
If the query string is omitted then the command will return
the "gcodes" file list by default.
- Websocket command:\
`{jsonrpc: "2.0", method: "get_file_list", params: {root: "gcodes"}
, id: <request id>}`
If `params` are omitted then the command will return the "gcodes"
file list.
- Returns:\
A list of objects containing file data in the following format:
```json
[
  {filename: "file name",
   size: <file size>,
   modified: "last modified date"},
  ...
]
```
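Because the file list is flat, with each `filename` holding a path relative to the root, a client may want to regroup entries by parent directory for display. A small sketch of that regrouping (the sample entries here are illustrative, not real server output):

```python
from collections import defaultdict
import posixpath

def group_by_directory(file_list):
    """Group a flat get_file_list result by parent directory.

    Filenames are relative to the root (e.g. "my_sub_dir/my_print.gcode"),
    so the parent is everything before the final slash. Top-level files
    are grouped under ".".
    """
    tree = defaultdict(list)
    for entry in file_list:
        parent, name = posixpath.split(entry["filename"])
        tree[parent or "."].append(dict(entry, filename=name))
    return dict(tree)

files = [
    {"filename": "a.gcode", "size": 100, "modified": "2020-07-01"},
    {"filename": "subdir/b.gcode", "size": 200, "modified": "2020-07-01"},
]
grouped = group_by_directory(files)
```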
### Get GCode Metadata
Get file metadata for a specified gcode file. If the file is located in
a subdirectory, then the file name should include the path relative to
the "gcodes" root. For example, if the file is located at:\
`http://host/server/files/gcodes/my_sub_dir/my_print.gcode`
Then the filename should be `my_sub_dir/my_print.gcode`.
- HTTP command:\
`GET /server/files/metadata?filename=<filename>`
- Websocket command:\
`{jsonrpc: "2.0", method: "get_metadata", params: {filename: "filename"}
, id: <request id>}`
- Returns:\
Metadata for the requested file if it exists. If any fields fail to
parse they will be omitted. The metadata will always include the file name,
modified time, and size.
```json
{
filename: "file name",
size: <file size>,
modified: "last modified date",
slicer: "Slicer Name",
first_layer_height: <in mm>,
layer_height: <in mm>,
object_height: <in mm>,
estimated_time: <time in seconds>,
filament_total: <in mm>,
thumbnails: [
{
width: <in pixels>,
height: <in pixels>,
size: <length of string>,
data: <base64 string>
}, ...
]
}
```
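Each entry in `thumbnails` carries base64 encoded image data extracted from the slicer's gcode comments. A minimal sketch of decoding one entry and writing it to disk, using the field names from the response above:

```python
import base64

def save_thumbnail(thumb, out_path):
    """Decode a metadata thumbnail entry and write the image to a file.

    `thumb` is one element of the "thumbnails" list; its "data" field
    is a base64 string.
    """
    raw = base64.b64decode(thumb["data"])
    with open(out_path, "wb") as f:
        f.write(raw)
    return len(raw)
```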
### Get directory information
Returns a list of files and subdirectories given a supplied path.
Unlike `/server/files/list`, this command does not walk through
subdirectories.
- HTTP command:\
`GET /server/files/directory?path=gcodes/my_subdir`
If the query string is omitted then the command will return
the "gcodes" file list by default.
- Websocket command:\
Not Available
- Returns:\
An object containing file and subdirectory information in the
following format:
```json
{
files: [
{
filename: "file name",
size: <file size>,
modified: "last modified date"
}, ...
],
dirs: [
{
dirname: "directory name",
modified: "last modified date"
}
]
}
```
### Make new directory
Creates a new directory at the specified path.
- HTTP command:\
`POST /server/files/directory?path=gcodes/my_new_dir`
- Websocket command:\
Not Available
- Returns:\
`ok` if successful
### Delete directory
Deletes a directory at the specified path.
- HTTP command:\
`DELETE /server/files/directory?path=gcodes/my_subdir`
- Websocket command:\
Not Available
If the specified directory contains files then the delete request
will fail, however it is possible to "force" deletion of the directory
and all files in it with an additional argument in the query string:\
`DELETE /server/files/directory?path=gcodes/my_subdir&force=true`
Note that a forced deletion will still check in with Klippy to be sure
that a file in the requested directory is not loaded by the virtual_sdcard.
- Returns:\
`ok` if successful
### Gcode File Download
- HTTP command:\
`GET /server/files/gcodes/<file_name>`
- Websocket command:\
Not Available
- Returns:\
The requested file
### File Upload
Upload a file to the "gcodes" root. A relative path may be added to the file
to upload to a subdirectory.
- HTTP command:\
`POST /server/files/upload`
The file to be uploaded should be added to the FormData per the XHR spec.
Optionally, a "print" attribute may be added to the form data. If set
to "true", Klippy will attempt to start the print after uploading. Note that
this value should be a string type, not boolean. This provides compatibility
with Octoprint's legacy upload API.
- Websocket command:\
Not Available
- Returns:\
The HTTP API returns the file name along with a successful response.
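The upload body is standard `multipart/form-data`. A sketch of composing that body by hand with the standard library, showing the `print` attribute sent as the string "true" as noted above (the boundary value is arbitrary):

```python
import io

def build_upload_body(filename, file_bytes, start_print=False,
                      boundary="MoonrakerFormBoundary"):
    """Compose a multipart/form-data body for POST /server/files/upload.

    Returns the Content-Type header value and the raw body bytes.
    """
    buf = io.BytesIO()

    def part(headers, body):
        buf.write(b"--" + boundary.encode() + b"\r\n")
        buf.write(headers.encode() + b"\r\n\r\n")
        buf.write(body + b"\r\n")

    # The file itself; a relative path in the name uploads to a subdirectory
    part('Content-Disposition: form-data; name="file"; filename="%s"'
         % filename, file_bytes)
    if start_print:
        # Must be the string "true", not a boolean (OctoPrint compatibility)
        part('Content-Disposition: form-data; name="print"', b"true")
    buf.write(b"--" + boundary.encode() + b"--\r\n")
    content_type = "multipart/form-data; boundary=%s" % boundary
    return content_type, buf.getvalue()
```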
### GCode File Delete
Delete a file in the "gcodes" root. A relative path may be added to the file
to delete a file in a subdirectory.
- HTTP command:\
`DELETE /server/files/gcodes/<file_name>`
- Websocket command:\
Not Available
- Returns:\
The HTTP request returns the name of the deleted file.
### Download klippy.log
- HTTP command:\
`GET /server/files/klippy.log`
- Websocket command:\
Not Available
- Returns:\
klippy.log
### Download moonraker.log
- HTTP command:\
`GET /server/files/moonraker.log`
- Websocket command:\
Not Available
- Returns:\
moonraker.log
## Authorization
Untrusted Clients must use a key to access the API by including it in the
`X-Api-Key` header for each HTTP Request. The API below allows authorized
clients to receive and change the current API Key.
### Get the Current API Key
- HTTP command:\
`GET /access/api_key`
- Websocket command:\
Not Available
- Returns:\
The current API key
### Generate a New API Key
- HTTP command:\
`POST /access/api_key`
- Websocket command:\
Not available
- Returns:\
The newly generated API key. This overwrites the previous key. Note that
the API key change is applied immediately; all subsequent HTTP requests
from untrusted clients must use the new key.
### Generate a Oneshot Token
Some HTTP requests do not expose the ability to change headers, which is
required to apply the `X-Api-Key`. To accommodate these requests a client
may ask the server for a Oneshot Token. Tokens expire in 5 seconds and may
only be used once, making them relatively safe for inclusion in the query
string.
- HTTP command:\
`GET /access/oneshot_token`
- Websocket command:\
Not available
- Returns:\
A temporary token that may be added to a request's query string for access
to any API endpoint. The query string should be added in the form of:\
`?token=randomly_generated_token`
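A sketch of attaching a fetched token to an arbitrary endpoint URL while preserving any existing query string (the token value itself would come from `GET /access/oneshot_token`):

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def with_oneshot_token(url, token):
    """Append token=<value> to a URL's query string.

    Existing query parameters are preserved; the token is added alongside
    them as required for endpoints that cannot set the X-Api-Key header.
    """
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["token"] = token
    return urlunparse(parts._replace(query=urlencode(query)))
```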
## Websocket notifications
Printer generated events are sent over the websocket as JSON-RPC 2.0
notifications. These notifications are sent to all connected clients
in the following format:
`{jsonrpc: "2.0", method: <event method name>, params: [<event state>]}`
It is important to keep in mind that the `params` value will always be
wrapped in an array as directed by the JSON-RPC standard. Currently
all notifications available are broadcast with a single parameter.
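Since `params` always arrives as a single-element array, a client-side dispatcher can unwrap it before invoking a handler. A minimal sketch (the registry and handler names here are hypothetical client code, not part of the API):

```python
import json

# Registry mapping notification method names to handler callables
HANDLERS = {}

def on(method):
    """Decorator registering a handler for a JSON-RPC notification method."""
    def wrap(func):
        HANDLERS[method] = func
        return func
    return wrap

def dispatch(raw):
    """Decode an incoming frame and route JSON-RPC notifications."""
    msg = json.loads(raw)
    handler = HANDLERS.get(msg.get("method"))
    if handler is not None and "id" not in msg:
        # Notifications carry no id; unwrap the single-element params array
        handler(*msg.get("params", []))

@on("notify_gcode_response")
def handle_gcode(response):
    print("gcode:", response)
```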
### Gcode response:
All calls to gcode.respond() are forwarded over the websocket. They arrive
as a "gcode_response" notification:
`{jsonrpc: "2.0", method: "notify_gcode_response", params: ["response"]}`
### Status subscriptions:
Status Subscriptions arrive as a "notify_status_update" notification:
`{jsonrpc: "2.0", method: "notify_status_update", params: [<status_data>]}`
The structure of the status data is identical to the structure that is
returned from a status request.
### Klippy Process State Changed:
The following Klippy state changes are broadcast over the websocket:
- ready
- disconnect
- shutdown
Note that Klippy's "ready" is different from the Printer's "ready". The
Klippy "ready" state is broadcast upon startup after initialization is
complete. It should also be noted that the websocket will be disconnected
after the "disconnect" state, as that notification is broadcast prior to a
restart. Klippy State notifications are broadcast in the following format:
`{jsonrpc: "2.0", method: "notify_klippy_state_changed", params: [<state>]}`
### File List Changed
When a client makes a change to the virtual sdcard file list
(via upload or delete) a notification is broadcast to alert all connected
clients of the change:
`{jsonrpc: "2.0", method: "notify_filelist_changed", params: [<file changed info>]}`
The `<file changed info>` param is an object in the following format:
```json
{action: "<action>", filename: "<file_name>", filelist: [<file_list>]}
```
The `action` is the operation that resulted in a file list change, the `filename`
is the name of the file the action was performed on, and the `filelist` is the current
file list, returned in the same format as `get_file_list`.
# Appendix
### Websocket setup
All transmissions over the websocket are done via json using the JSON-RPC 2.0
protocol. While the webserver expects a json encoded string, one limitation
of Eventlet's websocket is that it can not send string encoded frames. Thus
the client will receive data from the server in the form of a binary Blob
that must be read using a FileReader object then decoded.
The websocket is located at `ws://host:port/websocket`, for example:
```javascript
var s = new WebSocket("ws://" + location.host + "/websocket");
```
It also should be noted that if authorization is enabled, an untrusted client
must request a "oneshot token" and add that token's value to the websocket's
query string:
```
ws://host:port/websocket?token=<32 character base32 string>
```
This is necessary as it isn't currently possible to add `X-Api-Key` to a
websocket's request header.
The following startup sequence is recommended for clients which make use of
the websocket:
1) Attempt to connect to `/websocket` until successful using a timer-like
mechanism
2) Once connected, query `/printer/info` (or `get_printer_info`) for the ready
status.
- If the response returns an error (such as 404), set a timeout for
2 seconds and try again.
- If the response returns success, check the result's `is_ready` attribute
to determine if Klipper is ready.
- If Klipper is ready you may proceed to request status of printer objects
make subscriptions, get the file list, etc.
- If not ready check `error_detected` to see if Klippy has experienced an
error.
- If an error is detected it might be wise to prompt the user. You can
get a description of the error from the `message` attribute.
- If no error then re-request printer info in 2s.
- Repeat step 2 until Klipper reports ready.
- Clients should watch for the `notify_klippy_state_changed` event. If it
reports disconnected then Klippy has either been stopped or restarted. In
this instance the client should repeat the steps above to determine when
Klippy is ready.
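The polling loop above can be sketched with an injected fetch function. This is written synchronously for clarity; a real client would issue the `/printer/info` request over HTTP or the websocket and wait 2 seconds between attempts. `fetch_printer_info` and its error behavior are assumptions for illustration:

```python
def wait_for_klippy(fetch_printer_info, max_attempts=5):
    """Poll printer info until Klipper reports ready.

    `fetch_printer_info` returns a dict shaped like the /printer/info
    result, or raises ConnectionError while the server is unreachable.
    Returns the info dict once ready, or None after max_attempts.
    """
    for _ in range(max_attempts):
        try:
            info = fetch_printer_info()
        except ConnectionError:
            continue  # server not up yet; a real client sleeps ~2s here
        if info.get("is_ready"):
            return info
        if info.get("error_detected"):
            # Surface info.get("message") to the user, then keep retrying
            pass
    return None
```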

#### moonraker/app.py (new file, 410 lines)
# Klipper Web Server Rest API
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
import os
import mimetypes
import logging
import tornado
from inspect import isclass
from tornado.routing import Rule, PathMatches, AnyMatches
from utils import DEBUG, ServerError
from websockets import WebsocketManager, WebSocket
from authorization import AuthorizedRequestHandler, AuthorizedFileHandler
from authorization import Authorization

# Max Upload Size of 200MB
MAX_UPLOAD_SIZE = 200 * 1024 * 1024

# These endpoints are reserved for klippy/server communication only and are
# not exposed via http or the websocket
RESERVED_ENDPOINTS = [
    "list_endpoints", "moonraker/check_ready", "moonraker/get_configuration"
]

# Status objects require special parsing
def _status_parser(request):
    query_args = request.query_arguments
    args = {}
    for key, vals in query_args.items():
        parsed = []
        for v in vals:
            if v:
                parsed += v.decode().split(',')
        args[key] = parsed
    return args

# Built-in Query String Parser
def _default_parser(request):
    query_args = request.query_arguments
    args = {}
    for key, vals in query_args.items():
        if len(vals) != 1:
            raise tornado.web.HTTPError(404, "Invalid Query String")
        args[key] = vals[0].decode()
    return args

class MutableRouter(tornado.web.ReversibleRuleRouter):
    def __init__(self, application):
        self.application = application
        self.pattern_to_rule = {}
        super(MutableRouter, self).__init__(None)

    def get_target_delegate(self, target, request, **target_params):
        if isclass(target) and issubclass(target, tornado.web.RequestHandler):
            return self.application.get_handler_delegate(
                request, target, **target_params)
        return super(MutableRouter, self).get_target_delegate(
            target, request, **target_params)

    def has_rule(self, pattern):
        return pattern in self.pattern_to_rule

    def add_handler(self, pattern, target, target_params):
        if pattern in self.pattern_to_rule:
            self.remove_handler(pattern)
        new_rule = Rule(PathMatches(pattern), target, target_params)
        self.pattern_to_rule[pattern] = new_rule
        self.rules.append(new_rule)

    def remove_handler(self, pattern):
        rule = self.pattern_to_rule.pop(pattern, None)
        if rule is not None:
            try:
                self.rules.remove(rule)
            except Exception:
                logging.exception("Unable to remove rule: %s" % (pattern))

class APIDefinition:
    def __init__(self, endpoint, http_uri, ws_method,
                 request_methods, parser):
        self.endpoint = endpoint
        self.uri = http_uri
        self.ws_method = ws_method
        if not isinstance(request_methods, list):
            request_methods = [request_methods]
        self.request_methods = request_methods
        self.parser = parser

class MoonrakerApp:
    def __init__(self, server, args):
        self.server = server
        self.tornado_server = None
        self.api_cache = {}
        self.registered_base_handlers = []

        # Set Up Websocket and Authorization Managers
        self.wsm = WebsocketManager(server)
        self.auth = Authorization(args.apikey)

        mimetypes.add_type('text/plain', '.log')
        mimetypes.add_type('text/plain', '.gcode')

        # Set up HTTP only requests
        self.mutable_router = MutableRouter(self)
        app_handlers = [
            (AnyMatches(), self.mutable_router),
            (r"/websocket", WebSocket,
             {'wsm': self.wsm, 'auth': self.auth}),
            (r"/api/version", EmulateOctoprintHandler,
             {'server': server, 'auth': self.auth})]

        self.app = tornado.web.Application(
            app_handlers,
            serve_traceback=DEBUG,
            websocket_ping_interval=10,
            websocket_ping_timeout=30,
            enable_cors=False)
        self.get_handler_delegate = self.app.get_handler_delegate

        # Register handlers
        self.register_static_file_handler("moonraker.log", args.logfile)
        self.auth.register_handlers(self)

    def listen(self, host, port):
        self.tornado_server = self.app.listen(
            port, address=host, max_body_size=MAX_UPLOAD_SIZE,
            xheaders=True)

    async def close(self):
        if self.tornado_server is not None:
            self.tornado_server.stop()
        await self.wsm.close()
        self.auth.close()

    def load_config(self, config):
        if 'enable_cors' in config:
            self.app.settings['enable_cors'] = config['enable_cors']
        self.auth.load_config(config)

    def register_remote_handler(self, endpoint):
        if endpoint in RESERVED_ENDPOINTS:
            return
        api_def = self.api_cache.get(
            endpoint, self._create_api_definition(endpoint))
        if api_def.uri in self.registered_base_handlers:
            # reserved handler or already registered
            return
        logging.info("Registering remote endpoint: (%s) %s" % (
            " ".join(api_def.request_methods), api_def.uri))
        self.wsm.register_handler(api_def)
        params = {}
        params['server'] = self.server
        params['auth'] = self.auth
        params['methods'] = api_def.request_methods
        params['arg_parser'] = api_def.parser
        params['remote_callback'] = api_def.endpoint
        self.mutable_router.add_handler(
            api_def.uri, RemoteRequestHandler, params)
        self.registered_base_handlers.append(api_def.uri)

    def register_local_handler(self, uri, ws_method, request_methods,
                               callback, http_only=False):
        if uri in self.registered_base_handlers:
            return
        api_def = self._create_api_definition(
            uri, ws_method, request_methods)
        logging.info("Registering local endpoint: (%s) %s" % (
            " ".join(request_methods), uri))
        if not http_only:
            self.wsm.register_handler(api_def, callback)
        params = {}
        params['server'] = self.server
        params['auth'] = self.auth
        params['methods'] = request_methods
        params['arg_parser'] = api_def.parser
        params['callback'] = callback
        self.mutable_router.add_handler(uri, LocalRequestHandler, params)
        self.registered_base_handlers.append(uri)

    def register_static_file_handler(self, pattern, file_path,
                                     can_delete=False, op_check_cb=None):
        if pattern[0] != "/":
            pattern = "/server/files/" + pattern
        if os.path.isfile(file_path):
            pattern += '()'
        elif os.path.isdir(file_path):
            if pattern[-1] != "/":
                pattern += "/"
            pattern += "(.*)"
        else:
            logging.info("Invalid file path: %s" % (file_path))
            return
        methods = ['GET']
        if can_delete:
            methods.append('DELETE')
        params = {
            'server': self.server, 'auth': self.auth,
            'path': file_path, 'methods': methods, 'op_check_cb': op_check_cb}
        self.mutable_router.add_handler(pattern, FileRequestHandler, params)

    def register_upload_handler(self, pattern, upload_path, op_check_cb=None):
        params = {
            'server': self.server, 'auth': self.auth,
            'path': upload_path, 'op_check_cb': op_check_cb}
        self.mutable_router.add_handler(pattern, FileUploadHandler, params)

    def remove_handler(self, endpoint):
        api_def = self.api_cache.get(endpoint)
        if api_def is not None:
            self.wsm.remove_handler(api_def.uri)
            self.mutable_router.remove_handler(api_def.ws_method)

    def _create_api_definition(self, endpoint, ws_method=None,
                               request_methods=['GET', 'POST']):
        if endpoint in self.api_cache:
            return self.api_cache[endpoint]
        if endpoint[0] == '/':
            uri = endpoint
        else:
            uri = "/printer/" + endpoint
        if ws_method is None:
            ws_method = uri[1:].replace('/', '_')
        if endpoint.startswith("objects/"):
            parser = _status_parser
        else:
            parser = _default_parser
        api_def = APIDefinition(endpoint, uri, ws_method,
                                request_methods, parser)
        self.api_cache[endpoint] = api_def
        return api_def

# ***** Dynamic Handlers *****
class RemoteRequestHandler(AuthorizedRequestHandler):
    def initialize(self, remote_callback, server, auth,
                   methods, arg_parser):
        super(RemoteRequestHandler, self).initialize(server, auth)
        self.remote_callback = remote_callback
        self.methods = methods
        self.query_parser = arg_parser

    async def get(self):
        if 'GET' in self.methods:
            await self._process_http_request('GET')
        else:
            raise tornado.web.HTTPError(405)

    async def post(self):
        if 'POST' in self.methods:
            await self._process_http_request('POST')
        else:
            raise tornado.web.HTTPError(405)

    async def _process_http_request(self, method):
        args = {}
        if self.request.query:
            args = self.query_parser(self.request)
        request = self.server.make_request(
            self.remote_callback, method, args)
        result = await request.wait()
        if isinstance(result, ServerError):
            raise tornado.web.HTTPError(
                result.status_code, str(result))
        self.finish({'result': result})

class LocalRequestHandler(AuthorizedRequestHandler):
    def initialize(self, callback, server, auth,
                   methods, arg_parser):
        super(LocalRequestHandler, self).initialize(server, auth)
        self.callback = callback
        self.methods = methods
        self.query_parser = arg_parser

    async def get(self):
        if 'GET' in self.methods:
            await self._process_http_request('GET')
        else:
            raise tornado.web.HTTPError(405)

    async def post(self):
        if 'POST' in self.methods:
            await self._process_http_request('POST')
        else:
            raise tornado.web.HTTPError(405)

    async def delete(self):
        if 'DELETE' in self.methods:
            await self._process_http_request('DELETE')
        else:
            raise tornado.web.HTTPError(405)

    async def _process_http_request(self, method):
        args = {}
        if self.request.query:
            args = self.query_parser(self.request)
        try:
            result = await self.callback(self.request.path, method, args)
        except ServerError as e:
            raise tornado.web.HTTPError(
                e.status_code, str(e))
        self.finish({'result': result})

class FileRequestHandler(AuthorizedFileHandler):
    def initialize(self, server, auth, path, methods,
                   op_check_cb=None, default_filename=None):
        super(FileRequestHandler, self).initialize(
            server, auth, path, default_filename)
        self.methods = methods
        self.op_check_cb = op_check_cb

    def set_extra_headers(self, path):
        # The call below should never return an empty string,
        # as the path should have already been validated to be
        # a file
        basename = os.path.basename(self.absolute_path)
        self.set_header(
            "Content-Disposition", "attachment; filename=%s" % (basename))

    async def delete(self, path):
        if 'DELETE' not in self.methods:
            raise tornado.web.HTTPError(405)

        # Use the same method Tornado uses to validate the path
        self.path = self.parse_url_path(path)
        del path  # make sure we don't refer to path instead of self.path again
        absolute_path = self.get_absolute_path(self.root, self.path)
        self.absolute_path = self.validate_absolute_path(
            self.root, absolute_path)

        if self.op_check_cb is not None:
            try:
                await self.op_check_cb(self.absolute_path)
            except ServerError as e:
                if e.status_code == 403:
                    raise tornado.web.HTTPError(
                        403, "File is loaded, DELETE not permitted")

        os.remove(self.absolute_path)
        filename = os.path.basename(self.absolute_path)
        self.server.notify_filelist_changed(filename, 'removed')
        self.finish({'result': filename})

class FileUploadHandler(AuthorizedRequestHandler):
    def initialize(self, server, auth, path, op_check_cb=None):
        super(FileUploadHandler, self).initialize(server, auth)
        self.op_check_cb = op_check_cb
        self.file_path = path

    async def post(self):
        start_print = False
        print_args = self.request.arguments.get('print', [])
        if print_args:
            # Argument values arrive as bytes; decode before comparing
            start_print = print_args[0].decode().lower() == "true"
        upload = self.get_file()
        filename = "_".join(upload['filename'].strip().split())
        full_path = os.path.join(self.file_path, filename)
        # Make sure the file isn't currently loaded
        ongoing = False
        if self.op_check_cb is not None:
            try:
                ongoing = await self.op_check_cb(full_path)
            except ServerError as e:
                if e.status_code == 403:
                    raise tornado.web.HTTPError(
                        403, "File is loaded, upload not permitted")
                else:
                    # Couldn't reach Klippy, so it should be safe
                    # to permit the upload but not start
                    start_print = False
        # Don't start if another print is currently in progress
        start_print = start_print and not ongoing
        try:
            with open(full_path, 'wb') as fh:
                fh.write(upload['body'])
            self.server.notify_filelist_changed(filename, 'added')
        except Exception:
            raise tornado.web.HTTPError(500, "Unable to save file")
        if start_print:
            # Make a Klippy Request to "Start Print"
            gcode_apis = self.server.lookup_plugin('gcode_apis')
            try:
                await gcode_apis.gcode_start_print(
                    self.request.path, 'POST', {'filename': filename})
            except ServerError:
                # Attempt to start print failed
                start_print = False
        self.finish({'result': filename, 'print_started': start_print})

    def get_file(self):
        # File uploads must have a single file request
        if len(self.request.files) != 1:
            raise tornado.web.HTTPError(
                400, "Bad Request, can only process a single file upload")
        f_list = list(self.request.files.values())[0]
        if len(f_list) != 1:
            raise tornado.web.HTTPError(
                400, "Bad Request, can only process a single file upload")
        return f_list[0]

class EmulateOctoprintHandler(AuthorizedRequestHandler):
    def get(self):
        self.finish({
            'server': "1.1.1",
            'api': "0.1",
            'text': "OctoPrint Upload Emulator"})

#### moonraker/authorization.py (new file, 219 lines)
# API Key Based Authorization
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
import base64
import uuid
import os
import time
import logging
import tornado
from tornado.ioloop import IOLoop, PeriodicCallback
TOKEN_TIMEOUT = 5
CONNECTION_TIMEOUT = 3600
PRUNE_CHECK_TIME = 300 * 1000
class Authorization:
    def __init__(self, api_key_file):
        self.api_key_loc = os.path.expanduser(api_key_file)
        self.api_key = self._read_api_key()
        self.auth_enabled = True
        self.trusted_ips = []
        self.trusted_ranges = []
        self.trusted_connections = {}
        self.access_tokens = {}

        self.prune_handler = PeriodicCallback(
            self._prune_conn_handler, PRUNE_CHECK_TIME)
        self.prune_handler.start()

    def load_config(self, config):
        self.auth_enabled = config.get("require_auth", self.auth_enabled)
        self.trusted_ips = config.get("trusted_ips", self.trusted_ips)
        self.trusted_ranges = config.get("trusted_ranges", self.trusted_ranges)
        self._reset_trusted_connections()
        logging.info(
            "Authorization Configuration Loaded\n"
            "Auth Enabled: %s\n"
            "Trusted IPs:\n%s\n"
            "Trusted IP Ranges:\n%s" %
            (self.auth_enabled,
             ('\n').join(self.trusted_ips),
             ('\n').join(self.trusted_ranges)))

    def register_handlers(self, app):
        # Register Authorization Endpoints
        app.register_local_handler(
            "/access/api_key", None, ['GET', 'POST'],
            self._handle_apikey_request, http_only=True)
        app.register_local_handler(
            "/access/oneshot_token", None, ['GET'],
            self._handle_token_request, http_only=True)

    async def _handle_apikey_request(self, path, method, args):
        if method.upper() == 'POST':
            self.api_key = self._create_api_key()
        return self.api_key

    async def _handle_token_request(self, path, method, args):
        return self.get_access_token()

    def _read_api_key(self):
        if os.path.exists(self.api_key_loc):
            with open(self.api_key_loc, 'r') as f:
                api_key = f.read()
            return api_key
        # API Key file doesn't exist. Generate
        # a new api key and create the file.
        logging.info(
            "No API Key file found, creating new one at:\n%s"
            % (self.api_key_loc))
        return self._create_api_key()

    def _create_api_key(self):
        api_key = uuid.uuid4().hex
        with open(self.api_key_loc, 'w') as f:
            f.write(api_key)
        return api_key

    def _reset_trusted_connections(self):
        valid_conns = {}
        for ip, access_time in self.trusted_connections.items():
            if ip in self.trusted_ips or \
                    ip[:ip.rfind('.')] in self.trusted_ranges:
                valid_conns[ip] = access_time
            else:
                logging.info(
                    "Connection [%s] no longer trusted, removing" % (ip))
        self.trusted_connections = valid_conns

    def _prune_conn_handler(self):
        cur_time = time.time()
        expired_conns = []
        for ip, access_time in self.trusted_connections.items():
            if cur_time - access_time > CONNECTION_TIMEOUT:
                expired_conns.append(ip)
        for ip in expired_conns:
            self.trusted_connections.pop(ip)
            logging.info(
                "Trusted Connection Expired, IP: %s" % (ip))

    def _token_expire_handler(self, token):
        self.access_tokens.pop(token)

    def is_enabled(self):
        return self.auth_enabled

    def get_access_token(self):
        token = base64.b32encode(os.urandom(20)).decode()
        ioloop = IOLoop.current()
        self.access_tokens[token] = ioloop.call_later(
            TOKEN_TIMEOUT, self._token_expire_handler, token)
        return token

    def _check_trusted_connection(self, ip):
        if ip is not None:
            if ip in self.trusted_connections:
                self.trusted_connections[ip] = time.time()
                return True
            elif ip in self.trusted_ips or \
                    ip[:ip.rfind('.')] in self.trusted_ranges:
                logging.info(
                    "Trusted Connection Detected, IP: %s"
                    % (ip))
                self.trusted_connections[ip] = time.time()
                return True
        return False

    def _check_access_token(self, token):
        if token in self.access_tokens:
            token_handler = self.access_tokens.pop(token)
            IOLoop.current().remove_timeout(token_handler)
            return True
        else:
            return False

    def check_authorized(self, request):
        # Authorization is disabled, request may pass
        if not self.auth_enabled:
            return True

        # Check if IP is trusted
        ip = request.remote_ip
        if self._check_trusted_connection(ip):
            return True

        # Check API Key Header
        key = request.headers.get("X-Api-Key")
        if key and key == self.api_key:
            return True

        # Check one-shot access token
        token = request.arguments.get('token', [b""])[0].decode()
        if self._check_access_token(token):
            return True
        return False

    def close(self):
        self.prune_handler.stop()

class AuthorizedRequestHandler(tornado.web.RequestHandler):
    def initialize(self, server, auth):
        self.server = server
        self.auth = auth

    def prepare(self):
        if not self.auth.check_authorized(self.request):
            raise tornado.web.HTTPError(401, "Unauthorized")

    def set_default_headers(self):
        if self.settings['enable_cors']:
            self.set_header("Access-Control-Allow-Origin", "*")
            self.set_header(
                "Access-Control-Allow-Methods",
                "GET, POST, PUT, DELETE, OPTIONS")
            self.set_header(
                "Access-Control-Allow-Headers",
                "Origin, Accept, Content-Type, X-Requested-With, "
                "X-CRSF-Token")

    def options(self, *args, **kwargs):
        # Enable CORS if configured
        if self.settings['enable_cors']:
            self.set_status(204)
            self.finish()
        else:
            super(AuthorizedRequestHandler, self).options()

# Due to the way Python treats multiple inheritance its best
# to create a separate authorized handler for serving files
class AuthorizedFileHandler(tornado.web.StaticFileHandler):
    def initialize(self, server, auth, path, default_filename=None):
        super(AuthorizedFileHandler, self).initialize(path, default_filename)
        self.server = server
        self.auth = auth

    def prepare(self):
        if not self.auth.check_authorized(self.request):
            raise tornado.web.HTTPError(401, "Unauthorized")

    def set_default_headers(self):
        if self.settings['enable_cors']:
            self.set_header("Access-Control-Allow-Origin", "*")
            self.set_header(
                "Access-Control-Allow-Methods",
                "GET, POST, PUT, DELETE, OPTIONS")
            self.set_header(
                "Access-Control-Allow-Headers",
                "Origin, Accept, Content-Type, X-Requested-With, "
                "X-CRSF-Token")

    def options(self, *args, **kwargs):
        # Enable CORS if configured
        if self.settings['enable_cors']:
            self.set_status(204)
            self.finish()
        else:
            super(AuthorizedFileHandler, self).options()

#### moonraker/moonraker.py (new file, 460 lines)
# Moonraker - HTTP/Websocket API Server for Klipper
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
import argparse
import importlib
import os
import time
import socket
import logging
import json
import errno
import tornado
import tornado.netutil
from tornado import gen
from tornado.ioloop import IOLoop, PeriodicCallback
from tornado.util import TimeoutError
from tornado.locks import Event
from app import MoonrakerApp
from utils import ServerError, DEBUG
INIT_MS = 1000
CORE_PLUGINS = [
'file_manager', 'gcode_apis', 'machine',
'temperature_store', 'shell_command']
class Sentinel:
pass
class Server:
error = ServerError
def __init__(self, args):
self.host = args.address
self.port = args.port
# Options configurable by Klippy
self.request_timeout = 5.
self.long_running_gcodes = {}
self.long_running_requests = {}
# Event initialization
self.events = {}
# Klippy Connection Handling
socketfile = os.path.normpath(os.path.expanduser(args.socketfile))
self.klippy_server_sock = tornado.netutil.bind_unix_socket(
socketfile, backlog=1)
self.remove_server_sock = tornado.netutil.add_accept_handler(
self.klippy_server_sock, self._handle_klippy_connection)
self.klippy_sock = None
self.is_klippy_connected = False
self.is_klippy_ready = False
self.server_configured = False
self.partial_data = b""
# Server/IOLoop
self.server_running = False
self.moonraker_app = app = MoonrakerApp(self, args)
self.io_loop = IOLoop.current()
self.init_cb = PeriodicCallback(self._initialize, INIT_MS)
# Plugin initialization
self.plugins = {}
self.register_endpoint = app.register_local_handler
self.register_static_file_handler = app.register_static_file_handler
self.register_upload_handler = app.register_upload_handler
for plugin in CORE_PLUGINS:
self.load_plugin(plugin)
# Setup remote methods accessable to Klippy. Note that all
# registered remote methods should be of the notification type,
# they do not return a response to Klippy after execution
self.pending_requests = {}
self.remote_methods = {}
self.register_remote_method(
'set_klippy_shutdown', self._set_klippy_shutdown)
self.register_remote_method(
'response', self._handle_klippy_response)
self.register_remote_method(
'process_gcode_response', self._process_gcode_response)
self.register_remote_method(
'process_status_update', self._process_status_update)
def start(self):
logging.info(
"Starting Moonraker on (%s, %d)" %
(self.host, self.port))
self.moonraker_app.listen(self.host, self.port)
self.server_running = True
# ***** Plugin Management *****
def load_plugin(self, plugin_name, default=Sentinel):
if plugin_name in self.plugins:
return self.plugins[plugin_name]
# Make sure plugin exists
mod_path = os.path.join(
os.path.dirname(__file__), 'plugins', plugin_name + '.py')
if not os.path.exists(mod_path):
logging.info("Plugin (%s) does not exist" % (plugin_name))
return None
module = importlib.import_module("plugins." + plugin_name)
try:
load_func = getattr(module, "load_plugin")
plugin = load_func(self)
except Exception:
msg = "Unable to load plugin (%s)" % (plugin_name)
if default == Sentinel:
raise ServerError(msg)
return default
self.plugins[plugin_name] = plugin
logging.info("Plugin (%s) loaded" % (plugin_name))
return plugin
def lookup_plugin(self, plugin_name, default=Sentinel):
plugin = self.plugins.get(plugin_name, default)
if plugin == Sentinel:
raise ServerError("Plugin (%s) not found" % (plugin_name))
return plugin
def register_event_handler(self, event, callback):
self.events.setdefault(event, []).append(callback)
def send_event(self, event, *args):
events = self.events.get(event, [])
for evt in events:
self.io_loop.spawn_callback(evt, *args)
def register_remote_method(self, method_name, cb):
if method_name in self.remote_methods:
# XXX - may want to raise an exception here
logging.info("Remote method (%s) already registered"
% (method_name))
return
self.remote_methods[method_name] = cb
# ***** Klippy Connection *****
def _handle_klippy_connection(self, conn, addr):
if self.is_klippy_connected:
logging.info("New Connection received while Klippy Connected")
self.close_client_sock()
logging.info("Klippy Connection Established")
self.is_klippy_connected = True
conn.setblocking(0)
self.klippy_sock = conn
self.io_loop.add_handler(
self.klippy_sock.fileno(), self._handle_klippy_data,
IOLoop.READ | IOLoop.ERROR)
        # begin server initialization
self.init_cb.start()
def _handle_klippy_data(self, fd, events):
if events & IOLoop.ERROR:
self.close_client_sock()
return
try:
data = self.klippy_sock.recv(4096)
except socket.error as e:
# If bad file descriptor allow connection to be
# closed by the data check
if e.errno == errno.EBADF:
data = b''
else:
return
if data == b'':
# Socket Closed
self.close_client_sock()
return
commands = data.split(b'\x03')
commands[0] = self.partial_data + commands[0]
self.partial_data = commands.pop()
for cmd in commands:
try:
decoded_cmd = json.loads(cmd)
method = decoded_cmd.get('method')
params = decoded_cmd.get('params', {})
cb = self.remote_methods.get(method)
if cb is not None:
cb(**params)
else:
logging.info("Unknown command received %s" % cmd.decode())
except Exception:
logging.exception(
"Error processing Klippy Host Response: %s"
% (cmd.decode()))
def klippy_send(self, data):
# TODO: need a mutex or lock to make sure that multiple co-routines
# do not attempt to send
if not self.is_klippy_connected:
return False
retries = 10
data = json.dumps(data).encode() + b"\x03"
while data:
try:
sent = self.klippy_sock.send(data)
except socket.error as e:
if e.errno == errno.EBADF or e.errno == errno.EPIPE \
or not retries:
sent = 0
else:
# XXX - Should pause for 1ms here
retries -= 1
continue
retries = 10
if sent > 0:
data = data[sent:]
else:
logging.info("Error sending client data, closing socket")
self.close_client_sock()
return False
return True
async def _initialize(self):
await self._request_endpoints()
if not self.server_configured:
await self._request_config()
await self._request_ready()
if self.is_klippy_ready:
self.init_cb.stop()
async def _request_endpoints(self):
request = self.make_request("list_endpoints", "GET", {})
result = await request.wait()
if not isinstance(result, ServerError):
endpoints = result.get('hooks', {})
static_paths = result.get('static_paths', {})
for ep in endpoints:
self.moonraker_app.register_remote_handler(ep)
for sp in static_paths:
self.moonraker_app.register_static_file_handler(
sp['resource_id'], sp['file_path'])
async def _request_config(self):
request = self.make_request(
"moonraker/get_configuration", "GET", {})
result = await request.wait()
if not isinstance(result, ServerError):
self._load_config(result)
self.server_configured = True
async def _request_ready(self):
request = self.make_request(
"moonraker/check_ready", "GET", {})
result = await request.wait()
if not isinstance(result, ServerError):
is_ready = result.get("is_ready", False)
if is_ready:
self._set_klippy_ready(result.get('sensors', {}))
def _load_config(self, config):
self.request_timeout = config.get(
'request_timeout', self.request_timeout)
self.long_running_gcodes = config.get(
'long_running_gcodes', self.long_running_gcodes)
self.long_running_requests = config.get(
'long_running_requests', self.long_running_requests)
self.moonraker_app.load_config(config)
# load config for core plugins
for plugin_name in CORE_PLUGINS:
plugin = self.plugins[plugin_name]
if hasattr(plugin, "load_config"):
plugin.load_config(config)
# Load and apply optional plugin Configuration
plugin_cfgs = {name[7:]: cfg for name, cfg in config.items()
if name.startswith("plugin_")}
for name, cfg in plugin_cfgs.items():
plugin = self.plugins.get(name)
if plugin is None:
plugin = self.load_plugin(name)
if hasattr(plugin, "load_config"):
plugin.load_config(cfg)
# Remove plugins that are loaded but no longer configured
valid_plugins = CORE_PLUGINS + list(plugin_cfgs.keys())
self.io_loop.spawn_callback(self._prune_plugins, valid_plugins)
async def _prune_plugins(self, valid_plugins):
        for name, plugin in list(self.plugins.items()):
if name not in valid_plugins:
if hasattr(plugin, "close"):
await plugin.close()
self.plugins.pop(name)
def _handle_klippy_response(self, request_id, response):
        req = self.pending_requests.pop(request_id, None)
if req is not None:
if isinstance(response, dict) and 'error' in response:
response = ServerError(response['message'], 400)
req.notify(response)
else:
logging.info("No request matching response: " + str(response))
def _set_klippy_ready(self, sensors):
logging.info("Klippy ready")
self.is_klippy_ready = True
self.send_event("server:refresh_temp_sensors", sensors)
self.send_event("server:klippy_state_changed", "ready")
def _set_klippy_shutdown(self):
logging.info("Klippy has shutdown")
self.is_klippy_ready = False
self.send_event("server:klippy_state_changed", "shutdown")
def _process_gcode_response(self, response):
self.send_event("server:gcode_response", response)
def _process_status_update(self, status):
self.send_event("server:status_update", status)
def make_request(self, path, method, args):
timeout = self.long_running_requests.get(path, self.request_timeout)
        if path == "gcode/script":
            script = args.get('script', "")
            gc_parts = script.strip().split()
            if gc_parts:
                base_gc = gc_parts[0].upper()
                timeout = self.long_running_gcodes.get(base_gc, timeout)
base_request = BaseRequest(path, method, args, timeout)
self.pending_requests[base_request.id] = base_request
ret = self.klippy_send(base_request.to_dict())
if not ret:
self.pending_requests.pop(base_request.id)
base_request.notify(
ServerError("Klippy Host not connected", 503))
return base_request
def notify_filelist_changed(self, filename, action):
file_manager = self.lookup_plugin('file_manager')
try:
filelist = file_manager.get_file_list(format_list=True)
except ServerError:
filelist = []
result = {'filename': filename, 'action': action,
'filelist': filelist}
self.send_event("server:filelist_changed", result)
async def _kill_server(self):
# XXX - Currently this function is not used.
# Should I expose functionality to shutdown
# or restart the server, or simply remove this?
logging.info(
"Shutting Down Webserver")
        for plugin in self.plugins.values():
if hasattr(plugin, "close"):
await plugin.close()
self.close_client_sock()
self.close_server_sock()
if self.server_running:
self.server_running = False
await self.moonraker_app.close()
self.io_loop.stop()
def close_client_sock(self):
self.is_klippy_ready = False
self.server_configured = False
self.init_cb.stop()
if self.is_klippy_connected:
self.is_klippy_connected = False
logging.info("Klippy Connection Removed")
try:
self.io_loop.remove_handler(self.klippy_sock.fileno())
self.klippy_sock.close()
except socket.error:
logging.exception("Error Closing Client Socket")
self.send_event("server:klippy_state_changed", "disconnect")
def close_server_sock(self):
try:
self.remove_server_sock()
self.klippy_server_sock.close()
# XXX - remove server sock file (or use abstract?)
except Exception:
logging.exception("Error Closing Server Socket")
# Basic WebRequest class, easily converted to dict for json encoding
class BaseRequest:
def __init__(self, path, method, args, timeout=None):
self.id = id(self)
self.path = path
self.method = method
self.args = args
self._timeout = timeout
self._event = Event()
self.response = None
if timeout is not None:
self._timeout = time.time() + timeout
async def wait(self):
# Wait for klippy to process the request or until the timeout
# has been reached.
try:
await self._event.wait(timeout=self._timeout)
except TimeoutError:
logging.info("Request '%s' Timed Out" %
(self.method + " " + self.path))
return ServerError("Klippy Request Timed Out", 500)
return self.response
def notify(self, response):
self.response = response
self._event.set()
def to_dict(self):
return {'id': self.id, 'path': self.path,
'method': self.method, 'args': self.args}
def main():
# Parse start arguments
parser = argparse.ArgumentParser(
description="Moonraker - Klipper API Server")
parser.add_argument(
"-a", "--address", default='0.0.0.0', metavar='<address>',
help="host name or ip to bind to the Web Server")
parser.add_argument(
"-p", "--port", type=int, default=7125, metavar='<port>',
help="port the Web Server will listen on")
parser.add_argument(
"-s", "--socketfile", default="/tmp/moonraker", metavar='<socketfile>',
help="file name and location for the Unix Domain Socket")
parser.add_argument(
"-l", "--logfile", default="/tmp/moonraker.log", metavar='<logfile>',
help="log file name and location")
parser.add_argument(
"-k", "--apikey", default="~/.moonraker_api_key",
metavar='<apikeyfile>', help="API Key file location")
cmd_line_args = parser.parse_args()
# Setup Logging
log_file = os.path.normpath(os.path.expanduser(cmd_line_args.logfile))
cmd_line_args.logfile = log_file
root_logger = logging.getLogger()
file_hdlr = logging.handlers.TimedRotatingFileHandler(
log_file, when='midnight', backupCount=2)
root_logger.addHandler(file_hdlr)
root_logger.setLevel(logging.INFO)
logging.info("="*25 + "Starting Moonraker..." + "="*25)
formatter = logging.Formatter(
'%(asctime)s [%(filename)s:%(funcName)s()] - %(message)s')
file_hdlr.setFormatter(formatter)
# Start IOLoop and Server
io_loop = IOLoop.current()
try:
server = Server(cmd_line_args)
except Exception:
logging.exception("Moonraker Error")
return
try:
server.start()
io_loop.start()
except Exception:
logging.exception("Server Running Error")
io_loop.close(True)
logging.info("Server Shutdown")
if __name__ == '__main__':
main()
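Klippy-to-Moonraker traffic over the Unix domain socket is a stream of JSON messages delimited by a `0x03` byte; `_handle_klippy_data` above reassembles them across reads via `partial_data`. The framing can be sketched in isolation (the `frame`/`Deframer` names here are illustrative, not part of Moonraker):

```python
import json

def frame(msg):
    # Encode a dict as a 0x03-terminated JSON message
    return json.dumps(msg).encode() + b"\x03"

class Deframer:
    # Reassembles complete JSON messages from arbitrary stream
    # chunks, mirroring the partial_data logic above
    def __init__(self):
        self.partial_data = b""

    def feed(self, data):
        parts = data.split(b"\x03")
        parts[0] = self.partial_data + parts[0]
        # The final element is an incomplete trailing fragment
        self.partial_data = parts.pop()
        return [json.loads(p) for p in parts]
```

A message split across two reads decodes only once its delimiter arrives, which is why both sides buffer the trailing fragment.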


@@ -0,0 +1,6 @@
# Package definition for the plugins directory
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.


@@ -0,0 +1,285 @@
# Enhanced gcode file management and analysis
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
import os
import shutil
import time
import logging
import json
from tornado.ioloop import IOLoop
from tornado.locks import Lock
VALID_GCODE_EXTS = ['gcode', 'g', 'gco']
PYTHON_BIN = os.path.expanduser("~/moonraker-env/bin/python")
METADATA_SCRIPT = os.path.join(
os.path.dirname(__file__), "../../scripts/extract_metadata.py")
class FileManager:
def __init__(self, server):
self.server = server
self.file_paths = {}
self.file_lists = {}
self.gcode_metadata = {}
self.metadata_lock = Lock()
self.server.register_endpoint(
"/server/files/list", "file_list", ['GET'],
self._handle_filelist_request)
self.server.register_endpoint(
"/server/files/metadata", "file_metadata", ['GET'],
self._handle_metadata_request)
self.server.register_endpoint(
"/server/files/directory", None, ['GET', 'POST', 'DELETE'],
self._handle_directory_request, http_only=True)
def _register_static_files(self, gcode_path):
self.server.register_static_file_handler(
'/server/files/gcodes/', gcode_path, can_delete=True,
op_check_cb=self._handle_operation_check)
self.server.register_upload_handler(
'/server/files/upload', gcode_path,
op_check_cb=self._handle_operation_check)
self.server.register_upload_handler(
'/api/files/local', gcode_path,
op_check_cb=self._handle_operation_check)
def load_config(self, config):
sd = config.get('sd_path', None)
if sd is not None:
sd = os.path.normpath(os.path.expanduser(sd))
if sd != self.file_paths.get('gcodes', ""):
self.file_paths['gcodes'] = sd
self._update_file_list()
self._register_static_files(sd)
def get_sd_directory(self):
return self.file_paths.get('gcodes', "")
async def _handle_filelist_request(self, path, method, args):
root = args.get('root', "gcodes")
return self.get_file_list(format_list=True, base=root)
async def _handle_metadata_request(self, path, method, args):
requested_file = args.get('filename')
metadata = self.gcode_metadata.get(requested_file)
if metadata is None:
raise self.server.error(
"Metadata not available for <%s>" % (requested_file), 404)
metadata['filename'] = requested_file
return metadata
async def _handle_directory_request(self, path, method, args):
directory = args.get('path', "gcodes").strip('/')
dir_parts = directory.split("/")
base = dir_parts[0]
target = "/".join(dir_parts[1:])
if base not in self.file_paths:
raise self.server.error("Invalid base path (%s)" % (base))
root_path = self.file_paths[base]
dir_path = os.path.join(root_path, target)
method = method.upper()
if method == 'GET':
# Get list of files and subdirectories for this target
return self._list_directory(dir_path)
elif method == 'POST' and base == "gcodes":
# Create a new directory
try:
os.mkdir(dir_path)
except Exception as e:
raise self.server.error(str(e))
elif method == 'DELETE' and base == "gcodes":
# Remove a directory
if not os.path.isdir(dir_path):
raise self.server.error(
"Directory does not exist (%s)" % (directory))
if args.get('force', "false").lower() == "true":
# Make sure that the directory does not contain a file
# loaded by the virtual_sdcard
await self._handle_operation_check(dir_path)
shutil.rmtree(dir_path)
else:
try:
os.rmdir(dir_path)
except Exception as e:
raise self.server.error(str(e))
else:
raise self.server.error("Operation Not Supported", 405)
return "ok"
async def _handle_operation_check(self, requested_path):
# Get virtual_sdcard status
request = self.server.make_request(
"objects/status", 'GET', {'virtual_sdcard': []})
result = await request.wait()
if isinstance(result, self.server.error):
raise result
vsd = result.get('virtual_sdcard', {})
loaded_file = vsd.get('filename', "")
gc_path = self.file_paths.get('gcodes', "")
full_path = os.path.join(gc_path, loaded_file)
if os.path.isdir(requested_path):
            # Check to see if the loaded file is within the requested path
if full_path.startswith(requested_path):
raise self.server.error("File currently in use", 403)
elif full_path == requested_path:
raise self.server.error("File currently in use", 403)
ongoing = vsd.get('total_duration', 0.) > 0.
return ongoing
def _list_directory(self, path):
if not os.path.isdir(path):
raise self.server.error(
"Directory does not exist (%s)" % (path))
flist = {'dirs': [], 'files': []}
for fname in os.listdir(path):
full_path = os.path.join(path, fname)
modified = time.ctime(os.path.getmtime(full_path))
if os.path.isdir(full_path):
flist['dirs'].append({
'dirname': fname,
'modified': modified
})
elif os.path.isfile(full_path):
size = os.path.getsize(full_path)
flist['files'].append(
{'filename': fname,
'modified': modified,
'size': size})
return flist
def _shell_proc_callback(self, result):
try:
proc_resp = json.loads(result.strip())
except Exception:
logging.exception("file_manager: unable to load metadata")
return
proc_log = proc_resp.get('log', [])
for log_msg in proc_log:
logging.info(log_msg)
file_path = proc_resp.pop('file', None)
if file_path is not None:
self.gcode_metadata[file_path] = proc_resp.get('metadata')
async def _update_metadata(self):
async with self.metadata_lock:
            existing_data = {}
update_list = []
gc_files = dict(self.file_lists.get('gcodes', {}))
gc_path = self.file_paths.get('gcodes', "")
for fname, fdata in gc_files.items():
mdata = self.gcode_metadata.get(fname, {})
if mdata.get('size', "") == fdata.get('size') \
and mdata.get('modified', "") == fdata.get('modified'):
# file metadata has already been extracted
                    existing_data[fname] = mdata
else:
update_list.append(fname)
            self.gcode_metadata = existing_data
for fname in update_list:
cmd = " ".join([PYTHON_BIN, METADATA_SCRIPT, "-p",
gc_path, "-f", fname])
shell_command = self.server.lookup_plugin('shell_command')
scmd = shell_command.build_shell_command(
cmd, self._shell_proc_callback)
try:
await scmd.run(timeout=4.)
except Exception:
logging.exception("Error running extract_metadata.py")
def _update_file_list(self, base='gcodes'):
# Use os.walk find files in sd path and subdirs
path = self.file_paths.get(base, "")
        if not path:
logging.info("No sd_path set, cannot update")
return
logging.info("Updating File List...")
new_list = {}
for root, dirs, files in os.walk(path, followlinks=True):
for name in files:
ext = name[name.rfind('.')+1:]
if base == 'gcodes' and ext not in VALID_GCODE_EXTS:
continue
full_path = os.path.join(root, name)
r_path = full_path[len(path) + 1:]
size = os.path.getsize(full_path)
modified = time.ctime(os.path.getmtime(full_path))
new_list[r_path] = {'size': size, 'modified': modified}
self.file_lists[base] = new_list
if base == 'gcodes':
ioloop = IOLoop.current()
ioloop.spawn_callback(self._update_metadata)
return dict(new_list)
def get_file_list(self, format_list=False, base='gcodes'):
try:
filelist = self._update_file_list(base)
except Exception:
msg = "Unable to update file list"
logging.exception(msg)
raise self.server.error(msg)
if format_list:
flist = []
for fname in sorted(filelist, key=str.lower):
fdict = {'filename': fname}
fdict.update(filelist[fname])
flist.append(fdict)
return flist
return filelist
def get_file_metadata(self, filename):
if filename[0] == '/':
filename = filename[1:]
        # Remove "gcodes" if it is added. It is valid for a request to
        # include the root or to assume the root is gcodes
if filename.startswith('gcodes/'):
filename = filename[7:]
flist = self.get_file_list()
return self.gcode_metadata.get(filename, flist.get(filename, {}))
def list_dir(self, directory, simple_format=False):
# List a directory relative to its root. Currently the only
        # supported root is "gcodes"
if directory[0] == "/":
directory = directory[1:]
parts = directory.split("/", 1)
root = parts[0]
if root not in self.file_paths:
raise self.server.error(
"Invalid Directory Request: %s" % (directory))
path = self.file_paths[root]
if len(parts) == 1:
dir_path = path
else:
dir_path = os.path.join(path, parts[1])
if not os.path.isdir(dir_path):
raise self.server.error(
"Directory does not exist (%s)" % (dir_path))
flist = self._list_directory(dir_path)
if simple_format:
simple_list = []
for dirobj in flist['dirs']:
simple_list.append("*" + dirobj['dirname'])
for fileobj in flist['files']:
fname = fileobj['filename']
ext = fname[fname.rfind('.')+1:]
if root == "gcodes" and ext in VALID_GCODE_EXTS:
simple_list.append(fname)
return simple_list
return flist
def delete_file(self, path):
parts = path.split("/", 1)
root = parts[0]
if root not in self.file_paths or len(parts) != 2:
raise self.server.error("Invalid file path: %s" % (path))
root_path = self.file_paths[root]
full_path = os.path.join(root_path, parts[1])
if not os.path.isfile(full_path):
raise self.server.error("Invalid file path: %s" % (path))
os.remove(full_path)
def load_plugin(server):
return FileManager(server)
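The module-level `load_plugin(server)` above is the convention `Server.load_plugin` resolves with `getattr`; `load_config` and `close` are optional hooks the server invokes when present. A minimal plugin module following that shape (purely illustrative, not an existing Moonraker plugin):

```python
class ExamplePlugin:
    # Smallest useful plugin shape: hold the server reference and
    # accept this plugin's slice of the configuration
    def __init__(self, server):
        self.server = server
        self.config = {}

    def load_config(self, config):
        # Called by the server with this plugin's config section
        self.config = dict(config)

def load_plugin(server):
    # Required module-level entry point
    return ExamplePlugin(server)
```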


@@ -0,0 +1,67 @@
# Map HTTP/Websocket APIs for specific gcode tasks
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
GCODE_ENDPOINT = "gcode/script"
class GCodeAPIs:
def __init__(self, server):
self.server = server
# Register GCode Endpoints
self.server.register_endpoint(
"/printer/print/pause", "printer_print_pause", ['POST'],
self.gcode_pause)
self.server.register_endpoint(
"/printer/print/resume", "printer_print_resume", ['POST'],
self.gcode_resume)
self.server.register_endpoint(
"/printer/print/cancel", "printer_print_cancel", ['POST'],
self.gcode_cancel)
self.server.register_endpoint(
"/printer/print/start", "printer_print_start", ['POST'],
self.gcode_start_print)
self.server.register_endpoint(
"/printer/restart", "printer_restart", ['POST'],
self.gcode_restart)
self.server.register_endpoint(
"/printer/firmware_restart", "printer_firmware_restart", ['POST'],
self.gcode_firmware_restart)
async def _send_gcode(self, script):
args = {'script': script}
request = self.server.make_request(
GCODE_ENDPOINT, 'POST', args)
result = await request.wait()
if isinstance(result, self.server.error):
raise result
return result
async def gcode_pause(self, path, method, args):
return await self._send_gcode("PAUSE")
async def gcode_resume(self, path, method, args):
return await self._send_gcode("RESUME")
async def gcode_cancel(self, path, method, args):
return await self._send_gcode("CANCEL_PRINT")
async def gcode_start_print(self, path, method, args):
filename = args.get('filename')
# XXX - validate that file is on disk
if filename[0] != '/':
filename = '/' + filename
script = "M23 " + filename + "\nM24"
return await self._send_gcode(script)
async def gcode_restart(self, path, method, args):
return await self._send_gcode("RESTART")
async def gcode_firmware_restart(self, path, method, args):
return await self._send_gcode("FIRMWARE_RESTART")
def load_plugin(server):
return GCodeAPIs(server)
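`gcode_start_print` above reduces to a small script transformation: ensure a leading slash, select the file with `M23`, then start the print with `M24`. As a standalone sketch (the helper name is illustrative):

```python
def build_start_script(filename):
    # Mirror gcode_start_print: normalize the path, then
    # queue "M23 <file>" followed by "M24"
    if filename[0] != '/':
        filename = '/' + filename
    return "M23 " + filename + "\nM24"
```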


@@ -0,0 +1,34 @@
# Machine manipulation request handlers
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
import logging
class Machine:
def __init__(self, server):
self.server = server
self.server.register_endpoint(
"/machine/reboot", "machine_reboot", ['POST'],
self._handle_machine_request)
self.server.register_endpoint(
"/machine/shutdown", "machine_shutdown", ['POST'],
self._handle_machine_request)
async def _handle_machine_request(self, path, method, args):
if path == "/machine/shutdown":
cmd = "sudo shutdown now"
elif path == "/machine/reboot":
cmd = "sudo reboot now"
else:
raise self.server.error("Unsupported machine request")
shell_command = self.server.lookup_plugin('shell_command')
scmd = shell_command.build_shell_command(cmd, None)
try:
await scmd.run(timeout=2., verbose=False)
except Exception:
logging.exception("Error running cmd '%s'" % (cmd))
return "ok"
def load_plugin(server):
return Machine(server)


@@ -0,0 +1,685 @@
# PanelDue LCD display support
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
import serial
import os
import time
import json
import errno
import logging
import tempfile
from utils import ServerError
from tornado import gen, netutil
from tornado.ioloop import IOLoop
from tornado.locks import Lock
MIN_EST_TIME = 10.
class PanelDueError(ServerError):
pass
class SerialConnection:
def __init__(self, paneldue):
self.ioloop = IOLoop.current()
self.paneldue = paneldue
self.port = ""
        self.baud = 57600
self.sendlock = Lock()
self.partial_input = b""
self.ser = self.fd = None
self.connected = False
def load_config(self, config):
port = config.get('serial', None)
        baud = int(config.get('baud', 57600))
if port is None:
logging.info("No serial port specified, cannot connect")
return
if port != self.port or baud != self.baud or \
not self.connected:
self.disconnect()
self.port = port
self.baud = baud
self.ioloop.spawn_callback(self._connect)
def disconnect(self):
if self.connected:
if self.fd is not None:
self.ioloop.remove_handler(self.fd)
self.fd = None
self.connected = False
self.ser.close()
self.ser = None
logging.info("PanelDue Disconnected")
async def _connect(self):
start_time = connect_time = time.time()
while not self.connected:
if connect_time > start_time + 30.:
logging.info("Unable to connect, aborting")
break
logging.info("Attempting to connect to: %s" % (self.port))
try:
# XXX - sometimes the port cannot be exclusively locked, this
# would likely be due to a restart where the serial port was
# not correctly closed. Maybe don't use exclusive mode?
self.ser = serial.Serial(
self.port, self.baud, timeout=0, exclusive=True)
except (OSError, IOError, serial.SerialException):
logging.exception("Unable to open port: %s" % (self.port))
await gen.sleep(2.)
                connect_time = time.time()
continue
self.fd = self.ser.fileno()
os.set_blocking(self.fd, False)
self.ioloop.add_handler(
self.fd, self._handle_incoming, IOLoop.READ | IOLoop.ERROR)
self.connected = True
logging.info("PanelDue Connected")
def _handle_incoming(self, fd, events):
if events & IOLoop.ERROR:
logging.info("PanelDue Connection Error")
self.disconnect()
return
# Process incoming data using same method as gcode.py
try:
data = os.read(fd, 4096)
except os.error:
return
if not data:
# possibly an error, disconnect
self.disconnect()
logging.info("serial_display: No data received, disconnecting")
return
self.ioloop.spawn_callback(self._process_data, data)
async def _process_data(self, data):
# Remove null bytes, separate into lines
data = data.strip(b'\x00')
lines = data.split(b'\n')
lines[0] = self.partial_input + lines[0]
self.partial_input = lines.pop()
for line in lines:
line = line.strip().decode()
try:
await self.paneldue.process_line(line)
except ServerError:
logging.exception(
"GCode Processing Error: " + line)
self.paneldue.handle_gcode_response(
"!! GCode Processing Error: " + line)
except Exception:
logging.exception("Error during gcode processing")
async def send(self, data):
if self.connected:
async with self.sendlock:
while data:
try:
sent = os.write(self.fd, data)
except os.error as e:
if e.errno == errno.EBADF or e.errno == errno.EPIPE:
sent = 0
else:
await gen.sleep(.001)
continue
if sent:
data = data[sent:]
else:
logging.exception(
"Error writing data, closing serial connection")
self.disconnect()
return
class PanelDue:
def __init__(self, server):
self.server = server
self.ioloop = IOLoop.current()
self.ser_conn = SerialConnection(self)
self.file_manager = self.server.load_plugin('file_manager')
self.kinematics = "none"
self.machine_name = "Klipper"
self.firmware_name = "Repetier | Klipper"
self.last_message = None
self.last_gcode_response = None
self.current_file = ""
self.file_metadata = {}
# Initialize tracked state.
self.printer_state = {
'gcode': {}, 'toolhead': {}, 'virtual_sdcard': {},
'pause_resume': {}, 'fan': {}, 'display_status': {}}
self.available_macros = {}
self.non_trivial_keys = []
self.extruder_count = 0
self.heaters = []
self.is_ready = False
self.is_shutdown = False
# Register server events
self.server.register_event_handler(
"server:klippy_state_changed", self.handle_klippy_state)
self.server.register_event_handler(
"server:status_update", self.handle_status_update)
self.server.register_event_handler(
"server:gcode_response", self.handle_gcode_response)
self.server.register_remote_method(
"paneldue_beep", self.handle_paneldue_beep)
        # These commands are executed directly on the server and do
        # not make a request to Klippy
self.direct_gcodes = {
'M20': self._run_paneldue_M20,
'M30': self._run_paneldue_M30,
'M36': self._run_paneldue_M36,
'M408': self._run_paneldue_M408
}
# These gcodes require special parsing or handling prior to being
# sent via Klippy's "gcode/script" api command.
self.special_gcodes = {
'M0': lambda args: "CANCEL_PRINT",
'M23': self._prepare_M23,
'M24': lambda args: "RESUME",
'M25': lambda args: "PAUSE",
'M32': self._prepare_M32,
'M98': self._prepare_M98,
'M120': lambda args: "SAVE_GCODE_STATE STATE=PANELDUE",
'M121': lambda args: "RESTORE_GCODE_STATE STATE=PANELDUE",
'M290': self._prepare_M290,
'M999': lambda args: "FIRMWARE_RESTART"
}
def load_config(self, config):
self.ser_conn.load_config(config)
self.machine_name = config.get('machine_name', self.machine_name)
macros = config.get('macros', None)
if macros is not None:
# The macro's configuration name is the key, whereas the full
# command is the value
macros = [m for m in macros.split('\n') if m.strip()]
self.available_macros = {m.split()[0]: m for m in macros}
else:
self.available_macros = {}
ntkeys = config.get('non_trivial_keys', "Klipper state")
self.non_trivial_keys = [k for k in ntkeys.split('\n') if k.strip()]
self.ioloop.spawn_callback(self.write_response, {'status': 'C'})
logging.info("PanelDue Configured")
async def _klippy_request(self, command, method='GET', args={}):
request = self.server.make_request(command, method, args)
result = await request.wait()
if isinstance(result, self.server.error):
raise PanelDueError(str(result))
return result
async def handle_klippy_state(self, state):
if state == "ready":
await self._process_klippy_ready()
elif state == "shutdown":
await self._process_klippy_shutdown()
elif state == "disconnect":
await self._process_klippy_disconnect()
async def _process_klippy_ready(self):
# Request "info" and "configfile" status
retries = 10
while retries:
try:
printer_info = await self._klippy_request("info")
cfg_status = await self._klippy_request(
"objects/status", args={'configfile': []})
except PanelDueError:
logging.exception("PanelDue initialization request failed")
retries -= 1
if not retries:
raise
await gen.sleep(1.)
continue
break
self.firmware_name = "Repetier | Klipper " + printer_info['version']
config = cfg_status.get('configfile', {}).get('config', {})
printer_cfg = config.get('printer', {})
self.kinematics = printer_cfg.get('kinematics', "none")
logging.info(
"PanelDue Config Received:\n"
"Firmware Name: %s\n"
"Kinematics: %s\n"
"Printer Config: %s\n"
% (self.firmware_name, self.kinematics, str(config)))
        # Initialize printer state and make subscription request
self.printer_state = {
'gcode': {}, 'toolhead': {}, 'virtual_sdcard': {},
'pause_resume': {}, 'fan': {}, 'display_status': {}}
sub_args = {'gcode': [], 'toolhead': []}
self.extruder_count = 0
self.heaters = []
for cfg in config:
if cfg.startswith("extruder"):
self.extruder_count += 1
self.printer_state[cfg] = {}
self.heaters.append(cfg)
sub_args[cfg] = []
elif cfg == "heater_bed":
self.printer_state[cfg] = {}
self.heaters.append(cfg)
sub_args[cfg] = []
elif cfg in self.printer_state:
sub_args[cfg] = []
try:
await self._klippy_request(
"objects/subscription", method='POST', args=sub_args)
except PanelDueError:
logging.exception("Unable to complete subscription request")
self.is_shutdown = False
self.is_ready = True
async def _process_klippy_shutdown(self):
self.is_shutdown = True
async def _process_klippy_disconnect(self):
        # Tell the PanelDue that we are shutting down
await self.write_response({'status': 'S'})
self.is_ready = False
async def handle_status_update(self, status):
self.printer_state.update(status)
def handle_paneldue_beep(self, frequency, duration):
duration = int(duration * 1000.)
self.ioloop.spawn_callback(
self.write_response,
{'beep_freq': frequency, 'beep_length': duration})
async def process_line(self, line):
# If we find M112 in the line then skip verification
if "M112" in line.upper():
await self._klippy_request("emergency_stop", method='POST')
return
# Get line number
line_index = line.find(' ')
try:
line_no = int(line[1:line_index])
except Exception:
line_index = -1
line_no = None
# Verify checksum
cs_index = line.rfind('*')
try:
checksum = int(line[cs_index+1:])
except Exception:
# Invalid checksum, do not process
msg = "!! Invalid Checksum"
if line_no is not None:
                msg += " Line Number: %d" % line_no
logging.exception("PanelDue: " + msg)
raise PanelDueError(msg)
# Checksum is calculated by XORing every byte in the line other
# than the checksum itself
calculated_cs = 0
for c in line[:cs_index]:
calculated_cs ^= ord(c)
if calculated_cs & 0xFF != checksum:
msg = "!! Invalid Checksum"
if line_no is not None:
                msg += " Line Number: %d" % line_no
logging.info("PanelDue: " + msg)
raise PanelDueError(msg)
await self._run_gcode(line[line_index+1:cs_index])
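The RepRap-style checksum verified above is the XOR of every byte preceding the `*`. Both generation and verification can be sketched standalone (function names here are illustrative):

```python
def checksum(body):
    # XOR every byte of the line content, as in process_line
    cs = 0
    for c in body:
        cs ^= ord(c)
    return cs & 0xFF

def make_line(line_no, cmd):
    # Build an "N<num> <cmd>*<checksum>" line as a client would send it
    body = "N%d %s" % (line_no, cmd)
    return "%s*%d" % (body, checksum(body))
```

A received line is valid when the XOR of everything before the `*` equals the integer after it.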
async def _run_gcode(self, script):
# Execute the gcode. Check for special RRF gcodes that
# require special handling
parts = script.split()
cmd = parts[0].strip()
# Check for commands that query state and require immediate response
if cmd in self.direct_gcodes:
params = {}
for p in parts[1:]:
arg = p[0].lower() if p[0].lower() in "psr" else "p"
try:
val = int(p[1:].strip()) if arg in "sr" else p[1:].strip()
except Exception:
msg = "paneldue: Error parsing direct gcode %s" % (script)
self.handle_gcode_response("!! " + msg)
logging.exception(msg)
return
params["arg_" + arg] = val
func = self.direct_gcodes[cmd]
await func(**params)
return
# Prepare GCodes that require special handling
if cmd in self.special_gcodes:
func = self.special_gcodes[cmd]
script = func(parts[1:])
try:
args = {'script': script}
await self._klippy_request(
"gcode/script", method='POST', args=args)
except PanelDueError:
msg = "Error executing script %s" % script
self.handle_gcode_response("!! " + msg)
logging.exception(msg)
def _clean_filename(self, filename):
# Remove drive number
if filename.startswith("0:/"):
filename = filename[3:]
# Remove initial "gcodes" folder. This is necessary
# due to the HACK in the paneldue_M20 gcode.
if filename.startswith("gcodes/"):
filename = filename[6:]
elif filename.startswith("/gcodes/"):
filename = filename[7:]
# Start with a "/" so the gcode parser can correctly
# handle files that begin with digits or special chars
if filename[0] != "/":
filename = "/" + filename
return filename
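As a quick illustration of the stripping rules above, a standalone copy of the logic behaves as follows (filenames are made up):

```python
# Hypothetical standalone copy of _clean_filename, for illustration only.
def clean_filename(filename):
    if filename.startswith("0:/"):          # drop the drive number
        filename = filename[3:]
    if filename.startswith("gcodes/"):
        filename = filename[6:]             # slice keeps the '/' as the new root
    elif filename.startswith("/gcodes/"):
        filename = filename[7:]
    if filename[0] != "/":
        filename = "/" + filename
    return filename

assert clean_filename("0:/gcodes/part.gcode") == "/part.gcode"
assert clean_filename("macros/load") == "/macros/load"
```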
def _prepare_M23(self, args):
filename = self._clean_filename(args[0].strip())
return "M23 " + filename
def _prepare_M32(self, args):
filename = self._clean_filename(args[0].strip())
return "M23 " + filename + "\n" + "M24"
def _prepare_M98(self, args):
macro = args[0][1:].strip()
name_start = macro.rfind('/') + 1
macro = macro[name_start:]
cmd = self.available_macros.get(macro)
if cmd is None:
raise PanelDueError("Macro %s invalid" % (macro))
return cmd
def _prepare_M290(self, args):
        # args should be in the format Z0.02
offset = args[0][1:].strip()
return "SET_GCODE_OFFSET Z_ADJUST=%s MOVE=1" % (offset)
def handle_gcode_response(self, response):
# Only queue up "non-trivial" gcode responses. At the
# moment we'll handle state changes and errors
if "Klipper state" in response \
or response.startswith('!!'):
self.last_gcode_response = response
else:
for key in self.non_trivial_keys:
if key in response:
self.last_gcode_response = response
return
async def write_response(self, response):
byte_resp = json.dumps(response) + "\r\n"
await self.ser_conn.send(byte_resp.encode())
def _get_printer_status(self):
# PanelDue States applicable to Klipper:
# I = idle, P = printing from SD, S = stopped (shutdown),
# C = starting up (not ready), A = paused, D = pausing,
# B = busy
if self.is_shutdown:
return 'S'
printer_state = self.printer_state
is_active = printer_state['virtual_sdcard'].get('is_active', False)
paused = printer_state['pause_resume'].get('is_paused', False)
if paused:
if is_active:
return 'D'
else:
return 'A'
if is_active:
return 'P'
if printer_state['gcode'].get('busy', False):
return 'B'
return 'I'
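The state mapping above can be condensed into a small pure function; this is a sketch of the same decision order, not part of the plugin itself:

```python
# Sketch of the Klipper-state -> PanelDue status-letter mapping above.
def printer_status(is_shutdown, is_active, paused, busy):
    if is_shutdown:
        return 'S'                          # stopped (shutdown)
    if paused:
        return 'D' if is_active else 'A'    # pausing vs paused
    if is_active:
        return 'P'                          # printing from SD
    return 'B' if busy else 'I'             # busy vs idle

assert printer_status(False, True, True, False) == 'D'
assert printer_status(False, False, False, False) == 'I'
```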
async def _run_paneldue_M408(self, arg_r=None, arg_s=1):
response = {}
sequence = arg_r
response_type = arg_s
if not self.is_ready:
# Klipper is still starting up, do not query status
response['status'] = 'S' if self.is_shutdown else 'C'
await self.write_response(response)
return
# Send gcode responses
if sequence is not None and self.last_gcode_response:
response['seq'] = sequence + 1
response['resp'] = self.last_gcode_response
self.last_gcode_response = None
if response_type == 1:
            # Extended response requested
response['myName'] = self.machine_name
response['firmwareName'] = self.firmware_name
response['numTools'] = self.extruder_count
response['geometry'] = self.kinematics
response['axes'] = 3
p_state = self.printer_state
status = self._get_printer_status()
response['status'] = status
response['babystep'] = round(p_state['gcode'].get(
'homing_zpos', 0.), 3)
# Current position
pos = p_state['toolhead'].get('position', [0., 0., 0., 0.])
response['pos'] = [round(p, 2) for p in pos[:3]]
homed_pos = p_state['toolhead'].get('homed_axes', "")
response['homed'] = [int(a in homed_pos) for a in "xyz"]
sfactor = round(p_state['gcode'].get('speed_factor', 1.) * 100, 2)
response['sfactor'] = sfactor
# Print Progress Tracking
sd_status = p_state['virtual_sdcard']
fname = sd_status.get('filename', "")
if fname:
# We know a file has been loaded, initialize metadata
if self.current_file != fname:
self.current_file = fname
self.file_metadata = self.file_manager.get_file_metadata(fname)
progress = p_state['virtual_sdcard'].get('progress', 0)
# progress and print tracking
if progress:
response['fraction_printed'] = round(progress, 3)
est_time = self.file_metadata.get('estimated_time', 0)
if est_time > MIN_EST_TIME:
# file read estimate
times_left = [int(est_time - est_time * progress)]
# filament estimate
est_total_fil = self.file_metadata.get('filament_total')
if est_total_fil:
cur_filament = sd_status.get('filament_used', 0.)
fpct = min(1., cur_filament / est_total_fil)
times_left.append(int(est_time - est_time * fpct))
# object height estimate
obj_height = self.file_metadata.get('object_height')
if obj_height:
cur_height = p_state['gcode'].get('move_zpos', 0.)
hpct = min(1., cur_height / obj_height)
times_left.append(int(est_time - est_time * hpct))
else:
# The estimated time is not in the metadata, however we
# can still provide an estimate based on file progress
duration = sd_status.get('print_duration', 0.)
times_left = [int(duration / progress - duration)]
response['timesLeft'] = times_left
else:
# clear filename and metadata
self.current_file = ""
self.file_metadata = {}
fan_speed = p_state['fan'].get('speed')
if fan_speed is not None:
response['fanPercent'] = [round(fan_speed * 100, 1)]
if self.extruder_count > 0:
extruder_name = p_state['toolhead'].get('extruder')
if extruder_name is not None:
tool = 0
if extruder_name != "extruder":
tool = int(extruder_name[-1])
response['tool'] = tool
# Report Heater Status
efactor = round(p_state['gcode'].get('extrude_factor', 1.) * 100., 2)
for name in self.heaters:
temp = round(p_state[name].get('temperature', 0.0), 1)
target = round(p_state[name].get('target', 0.0), 1)
response.setdefault('heaters', []).append(temp)
response.setdefault('active', []).append(target)
response.setdefault('standby', []).append(target)
response.setdefault('hstat', []).append(2 if target else 0)
if name.startswith('extruder'):
response.setdefault('efactor', []).append(efactor)
response.setdefault('extr', []).append(round(pos[3], 2))
# Display message (via M117)
msg = p_state['display_status'].get('message')
if msg and msg != self.last_message:
response['message'] = msg
# reset the message so it only shows once. The paneldue
# is strange about this, and displays it as a full screen
# notification
self.last_message = msg
await self.write_response(response)
async def _run_paneldue_M20(self, arg_p, arg_s=0):
response_type = arg_s
if response_type != 2:
logging.info(
"PanelDue: Cannot process response type %d in M20"
% (response_type))
return
path = arg_p
# Strip quotes if they exist
path = path.strip('\"')
        # Path should come in as "0:/macros" or "0:/<gcode_folder>". With
        # repetier compatibility enabled, the default folder is root,
        # ie. "0:/"
if path.startswith("0:/"):
path = path[2:]
response = {'dir': path}
response['files'] = []
if path == "/macros":
response['files'] = list(self.available_macros.keys())
else:
# HACK: The PanelDue has a bug where it does not correctly detect
# subdirectories if we return the root as "/". Moonraker can
# support a "gcodes" directory, however we must choose between this
# support or disabling RRF specific gcodes (this is done by
# identifying as Repetier).
# The workaround below converts both "/" and "/gcodes" paths to
# "gcodes".
if path == "/":
response['dir'] = "/gcodes"
path = "gcodes"
elif path.startswith("/gcodes"):
path = path[1:]
flist = self.file_manager.list_dir(path, simple_format=True)
if flist:
response['files'] = flist
await self.write_response(response)
async def _run_paneldue_M30(self, arg_p=None):
# Delete a file. Clean up the file name and make sure
# it is relative to the "gcodes" root.
path = arg_p
path = path.strip('\"')
if path.startswith("0:/"):
path = path[3:]
elif path[0] == "/":
path = path[1:]
if not path.startswith("gcodes/"):
path = "gcodes/" + path
self.file_manager.delete_file(path)
async def _run_paneldue_M36(self, arg_p=None):
response = {}
filename = arg_p
sd_status = self.printer_state.get('virtual_sdcard', {})
if filename is None:
# PanelDue is requesting file information on a
# currently printed file
active = False
if sd_status:
filename = sd_status['filename']
active = sd_status['is_active']
if not filename or not active:
# Either no file printing or no virtual_sdcard
response['err'] = 1
await self.write_response(response)
return
else:
response['fileName'] = filename.split("/")[-1]
# For consistency make sure that the filename begins with the
# "gcodes/" root. The M20 HACK should add this in some cases.
# Ideally we would add support to the PanelDue firmware that
# indicates Moonraker supports a "gcodes" directory.
if not filename.startswith("gcodes/"):
filename = "gcodes/" + filename
metadata = self.file_manager.get_file_metadata(filename)
if metadata:
response['err'] = 0
response['size'] = metadata['size']
# workaround for PanelDue replacing the first "T" found
response['lastModified'] = "T" + metadata['modified']
slicer = metadata.get('slicer')
if slicer is not None:
response['generatedBy'] = slicer
height = metadata.get('object_height')
if height is not None:
response['height'] = round(height, 2)
layer_height = metadata.get('layer_height')
if layer_height is not None:
response['layerHeight'] = round(layer_height, 2)
filament = metadata.get('filament_total')
if filament is not None:
response['filament'] = [round(filament, 1)]
est_time = metadata.get('estimated_time')
if est_time is not None:
response['printTime'] = int(est_time + .5)
else:
response['err'] = 1
await self.write_response(response)
async def close(self):
self.ser_conn.disconnect()
def load_plugin(server):
return PanelDue(server)

@@ -0,0 +1,88 @@
# linux shell command execution utility
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
import os
import shlex
import subprocess
import logging
import tornado
from tornado import gen
from tornado.ioloop import IOLoop
class ShellCommand:
def __init__(self, cmd, callback=None):
self.io_loop = IOLoop.current()
self.name = cmd
self.output_cb = callback
cmd = os.path.expanduser(cmd)
self.command = shlex.split(cmd)
self.partial_output = b""
def _process_output(self, fd, events):
if events & IOLoop.ERROR:
return
try:
data = os.read(fd, 4096)
except Exception:
return
        data = self.partial_output + data
        if b'\n' not in data:
            self.partial_output = data
            return
        elif not data.endswith(b'\n'):
            # buffer the trailing partial line for the next read
            split = data.rfind(b'\n') + 1
            self.partial_output = data[split:]
            data = data[:split]
        else:
            self.partial_output = b""
try:
self.output_cb(data)
except Exception:
logging.exception("Error writing command output")
async def run(self, timeout=2., verbose=True):
if not timeout or self.output_cb is None:
# Fire and forget commands cannot be verbose as we can't
# clean up after the process terminates
verbose = False
try:
proc = subprocess.Popen(
self.command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
except Exception:
logging.exception(
"shell_command: Command {%s} failed" % (self.name))
return
if verbose:
fd = proc.stdout.fileno()
self.io_loop.add_handler(
fd, self._process_output, IOLoop.READ | IOLoop.ERROR)
elif not timeout:
# fire and forget, return from execution
return
sleeptime = 0
complete = False
while sleeptime < timeout:
await gen.sleep(.05)
sleeptime += .05
if proc.poll() is not None:
complete = True
break
if not complete:
proc.terminate()
if verbose:
if self.partial_output:
self.output_cb(self.partial_output)
self.partial_output = b""
if complete:
msg = "Command {%s} finished\n" % (self.name)
else:
msg = "Command {%s} timed out" % (self.name)
logging.info("shell_command: " + msg)
self.io_loop.remove_handler(fd)
class ShellCommandFactory:
def build_shell_command(self, cmd, callback):
return ShellCommand(cmd, callback)
def load_plugin(server):
return ShellCommandFactory()
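The line-buffering performed in `_process_output` is easiest to see in isolation. A minimal sketch of the same split, with made-up byte chunks:

```python
# Sketch of the partial-line buffering in ShellCommand._process_output:
# complete lines are delivered, anything after the last newline is held
# back until more data arrives.
def split_complete_lines(partial, chunk):
    data = partial + chunk
    if b'\n' not in data:
        return b"", data                     # no complete line yet
    if not data.endswith(b'\n'):
        split = data.rfind(b'\n') + 1
        return data[:split], data[split:]    # deliver lines, buffer the tail
    return data, b""

out, rest = split_complete_lines(b"", b"hello\nwor")
assert (out, rest) == (b"hello\n", b"wor")
assert split_complete_lines(rest, b"ld\n") == (b"world\n", b"")
```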

@@ -0,0 +1,73 @@
# Heater sensor temperature storage
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
import logging
from collections import deque
from tornado.ioloop import IOLoop, PeriodicCallback
TEMPERATURE_UPDATE_MS = 1000
TEMPERATURE_STORE_SIZE = 20 * 60
class TemperatureStore:
def __init__(self, server):
self.server = server
# Temperature Store Tracking
self.last_temps = {}
self.temperature_store = {}
self.temp_update_cb = PeriodicCallback(
self._update_temperature_store, TEMPERATURE_UPDATE_MS)
# Register status update event
self.server.register_event_handler(
"server:status_update", self._set_current_temps)
self.server.register_event_handler(
"server:refresh_temp_sensors", self._init_sensors)
# Register endpoint
self.server.register_endpoint(
"/server/temperature_store", "server_temperature_store", ['GET'],
self._handle_temp_store_request)
def _init_sensors(self, sensors):
logging.info("Configuring available sensors: %s" % (str(sensors)))
new_store = {}
for sensor in sensors:
if sensor in self.temperature_store:
new_store[sensor] = self.temperature_store[sensor]
else:
new_store[sensor] = {
'temperatures': deque(maxlen=TEMPERATURE_STORE_SIZE),
'targets': deque(maxlen=TEMPERATURE_STORE_SIZE)}
self.temperature_store = new_store
self.temp_update_cb.start()
# XXX - spawn a callback that requests temperature updates?
def _set_current_temps(self, data):
for sensor in self.temperature_store:
if sensor in data:
self.last_temps[sensor] = (
round(data[sensor].get('temperature', 0.), 2),
data[sensor].get('target', 0.))
def _update_temperature_store(self):
# XXX - If klippy is not connected, set values to zero
# as they are unknown?
for sensor, (temp, target) in self.last_temps.items():
self.temperature_store[sensor]['temperatures'].append(temp)
self.temperature_store[sensor]['targets'].append(target)
async def _handle_temp_store_request(self, path, method, args):
store = {}
for name, sensor in self.temperature_store.items():
store[name] = {k: list(v) for k, v in sensor.items()}
return store
async def close(self):
self.temp_update_cb.stop()
def load_plugin(server):
return TemperatureStore(server)
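The per-sensor store relies on `deque(maxlen=...)` as a fixed-size ring buffer: with one sample per second, `TEMPERATURE_STORE_SIZE = 20 * 60` retains roughly the last 20 minutes of readings. A small-scale sketch of the eviction behavior:

```python
from collections import deque

# maxlen=3 stands in for TEMPERATURE_STORE_SIZE; appending past the
# limit silently drops the oldest sample from the front.
temps = deque(maxlen=3)
for reading in [20.0, 21.5, 22.0, 22.5]:
    temps.append(reading)

assert list(temps) == [21.5, 22.0, 22.5]   # 20.0 was evicted
```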

moonraker/utils.py (new file)

@@ -0,0 +1,31 @@
# General Server Utilities
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
import logging
import json
DEBUG = True
class ServerError(Exception):
def __init__(self, message, status_code=400):
Exception.__init__(self, message)
self.status_code = status_code
# XXX - Currently logging over the socket is not implemented.
# I don't think it would be wise to log everything over the
# socket, however it may be useful to log some specific items.
# Decide what to do, then either finish the implementation or
# remove this code
class SocketLoggingHandler(logging.Handler):
def __init__(self, server_manager):
super(SocketLoggingHandler, self).__init__()
self.server_manager = server_manager
def emit(self, record):
record.msg = "[MOONRAKER]: " + record.msg
# XXX - Convert log record to dict before sending,
# the klippy_send function will handle serialization
self.server_manager.klippy_send(record)

moonraker/websockets.py (new file)

@@ -0,0 +1,234 @@
# Websocket Request/Response Handler
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
import logging
import tornado
import json
from tornado.ioloop import IOLoop
from tornado.websocket import WebSocketHandler, WebSocketClosedError
from utils import ServerError, DEBUG
class JsonRPC:
def __init__(self):
self.methods = {}
def register_method(self, name, method):
self.methods[name] = method
def remove_method(self, name):
self.methods.pop(name)
async def dispatch(self, data):
response = None
try:
request = json.loads(data)
except Exception:
msg = "Websocket data not json: %s" % (str(data))
logging.exception(msg)
response = self.build_error(-32700, "Parse error")
return json.dumps(response)
if DEBUG:
logging.info("Websocket Request::" + data)
if isinstance(request, list):
response = []
for req in request:
resp = await self.process_request(req)
if resp is not None:
response.append(resp)
if not response:
response = None
else:
response = await self.process_request(request)
if response is not None:
response = json.dumps(response)
logging.info("Websocket Response::" + response)
return response
async def process_request(self, request):
req_id = request.get('id', None)
rpc_version = request.get('jsonrpc', "")
method_name = request.get('method', None)
if rpc_version != "2.0" or not isinstance(method_name, str):
return self.build_error(-32600, "Invalid Request", req_id)
method = self.methods.get(method_name, None)
if method is None:
return self.build_error(-32601, "Method not found", req_id)
if 'params' in request:
params = request['params']
if isinstance(params, list):
response = await self.execute_method(method, req_id, *params)
elif isinstance(params, dict):
response = await self.execute_method(method, req_id, **params)
else:
return self.build_error(-32600, "Invalid Request", req_id)
else:
response = await self.execute_method(method, req_id)
return response
async def execute_method(self, method, req_id, *args, **kwargs):
try:
result = await method(*args, **kwargs)
except TypeError as e:
return self.build_error(-32603, "Invalid params", req_id)
except Exception as e:
return self.build_error(-31000, str(e), req_id)
if isinstance(result, ServerError):
return self.build_error(result.status_code, str(result), req_id)
elif req_id is None:
return None
else:
return self.build_result(result, req_id)
def build_result(self, result, req_id):
return {
'jsonrpc': "2.0",
'result': result,
'id': req_id
}
def build_error(self, code, msg, req_id=None):
return {
'jsonrpc': "2.0",
'error': {'code': code, 'message': msg},
'id': req_id
}
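The envelopes produced by `build_result` and `build_error` follow the JSON-RPC 2.0 specification; a round-trip sketch with illustrative values:

```python
import json

# Standalone copies of the envelope builders above, for illustration.
def build_result(result, req_id):
    return {'jsonrpc': "2.0", 'result': result, 'id': req_id}

def build_error(code, msg, req_id=None):
    return {'jsonrpc': "2.0",
            'error': {'code': code, 'message': msg},
            'id': req_id}

ok = json.loads(json.dumps(build_result({'status': 'ready'}, 42)))
assert ok == {'jsonrpc': "2.0", 'result': {'status': 'ready'}, 'id': 42}
err = build_error(-32601, "Method not found")
assert err['error']['code'] == -32601 and err['id'] is None
```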
class WebsocketManager:
def __init__(self, server):
self.server = server
self.websockets = {}
self.ws_lock = tornado.locks.Lock()
self.rpc = JsonRPC()
# Register events
self.server.register_event_handler(
"server:klippy_state_changed", self._handle_klippy_state_changed)
self.server.register_event_handler(
"server:gcode_response", self._handle_gcode_response)
self.server.register_event_handler(
"server:status_update", self._handle_status_update)
self.server.register_event_handler(
"server:filelist_changed", self._handle_filelist_changed)
async def _handle_klippy_state_changed(self, state):
await self.notify_websockets("klippy_state_changed", state)
async def _handle_gcode_response(self, response):
await self.notify_websockets("gcode_response", response)
async def _handle_status_update(self, status):
await self.notify_websockets("status_update", status)
async def _handle_filelist_changed(self, flist):
await self.notify_websockets("filelist_changed", flist)
def register_handler(self, api_def, callback=None):
for r_method in api_def.request_methods:
cmd = r_method.lower() + '_' + api_def.ws_method
if callback is not None:
# Callback is a local method
rpc_cb = self._generate_local_callback(
api_def.endpoint, r_method, callback)
else:
# Callback is a remote method
rpc_cb = self._generate_callback(api_def.endpoint, r_method)
self.rpc.register_method(cmd, rpc_cb)
def remove_handler(self, ws_method):
for prefix in ["get", "post"]:
self.rpc.remove_method(prefix + "_" + ws_method)
def _generate_callback(self, endpoint, request_method):
async def func(**kwargs):
request = self.server.make_request(
endpoint, request_method, kwargs)
result = await request.wait()
return result
return func
def _generate_local_callback(self, endpoint, request_method, callback):
async def func(**kwargs):
try:
result = await callback(
endpoint, request_method, kwargs)
except ServerError as e:
result = e
return result
return func
def has_websocket(self, ws_id):
return ws_id in self.websockets
async def add_websocket(self, ws):
async with self.ws_lock:
self.websockets[ws.uid] = ws
logging.info("New Websocket Added: %d" % ws.uid)
async def remove_websocket(self, ws):
async with self.ws_lock:
old_ws = self.websockets.pop(ws.uid, None)
if old_ws is not None:
logging.info("Websocket Removed: %d" % ws.uid)
async def notify_websockets(self, name, data):
notification = json.dumps({
'jsonrpc': "2.0",
'method': "notify_" + name,
'params': [data]})
async with self.ws_lock:
            # iterate over a copy since closed sockets are removed inline
            for ws in list(self.websockets.values()):
try:
ws.write_message(notification)
except WebSocketClosedError:
self.websockets.pop(ws.uid)
logging.info("Websocket Removed: %d" % ws.uid)
except Exception:
logging.exception(
"Error sending data over websocket: %d" % (ws.uid))
async def close(self):
async with self.ws_lock:
for ws in self.websockets.values():
ws.close()
self.websockets = {}
class WebSocket(WebSocketHandler):
def initialize(self, wsm, auth):
self.wsm = wsm
self.auth = auth
self.rpc = self.wsm.rpc
self.uid = id(self)
async def open(self):
await self.wsm.add_websocket(self)
def on_message(self, message):
io_loop = IOLoop.current()
io_loop.spawn_callback(self._process_message, message)
async def _process_message(self, message):
try:
response = await self.rpc.dispatch(message)
if response is not None:
self.write_message(response)
except Exception:
logging.exception("Websocket Command Error")
def on_close(self):
io_loop = IOLoop.current()
io_loop.spawn_callback(self.wsm.remove_websocket, self)
def check_origin(self, origin):
if self.settings['enable_cors']:
# allow CORS
return True
else:
return super(WebSocket, self).check_origin(origin)
# Check Authorized User
def prepare(self):
if not self.auth.check_authorized(self.request):
raise tornado.web.HTTPError(401, "Unauthorized")

scripts/extract_metadata.py (new file)

@@ -0,0 +1,355 @@
# GCode metadata extraction utility
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
import json
import argparse
import re
import os
import sys
import time
# regex helpers
def _regex_find_floats(pattern, data, strict=False):
# If strict is enabled, pattern requires a floating point
# value, otherwise it can be an integer value
fptrn = r'\d+\.\d*' if strict else r'\d+\.?\d*'
matches = re.findall(pattern, data)
if matches:
        # convert the matched strings to float values
try:
return [float(h) for h in re.findall(
fptrn, " ".join(matches))]
except Exception:
pass
return []
def _regex_find_ints(pattern, data):
matches = re.findall(pattern, data)
if matches:
        # extract the integer values from the matched strings
try:
return [int(h) for h in re.findall(
r'\d+', " ".join(matches))]
except Exception:
pass
return []
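A quick standalone check of the float-extraction helper above, run against made-up slicer comment lines:

```python
import re

# Standalone copy of _regex_find_floats, for illustration.
def regex_find_floats(pattern, data, strict=False):
    # strict requires a decimal point; otherwise integers also match
    fptrn = r'\d+\.\d*' if strict else r'\d+\.?\d*'
    matches = re.findall(pattern, data)
    if matches:
        try:
            return [float(h) for h in re.findall(fptrn, " ".join(matches))]
        except Exception:
            pass
    return []

line = "; first_layer_height = 0.25"       # made-up footer line
assert regex_find_floats(r"; first_layer_height =.*", line) == [0.25]
assert regex_find_floats(r";MAXZ:.*", ";MAXZ:25", strict=True) == []
```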
# Slicer parsing implementations
class BaseSlicer(object):
def __init__(self, name, id_pattern):
self.name = name
self.id_pattern = id_pattern
self.header_data = self.footer_data = self.log = None
def set_data(self, header_data, footer_data, log):
self.header_data = header_data
self.footer_data = footer_data
self.log = log
def get_name(self):
return self.name
def get_id_pattern(self):
return self.id_pattern
def _parse_min_float(self, pattern, data):
result = _regex_find_floats(pattern, data)
if result:
return min(result)
else:
return None
def _parse_max_float(self, pattern, data):
result = _regex_find_floats(pattern, data)
if result:
return max(result)
else:
return None
class PrusaSlicer(BaseSlicer):
def __init__(self, name="PrusaSlicer", id_pattern=r"PrusaSlicer\s.*\son"):
super(PrusaSlicer, self).__init__(name, id_pattern)
def parse_first_layer_height(self):
return self._parse_min_float(
r"; first_layer_height =.*", self.footer_data)
def parse_layer_height(self):
return self._parse_min_float(r"; layer_height =.*", self.footer_data)
def parse_object_height(self):
return self._parse_max_float(r"G1\sZ\d+\.\d*\sF", self.footer_data)
def parse_filament_total(self):
return self._parse_max_float(
r"filament\sused\s\[mm\]\s=\s\d+\.\d*", self.footer_data)
def parse_estimated_time(self):
time_matches = re.findall(
r';\sestimated\sprinting\stime.*', self.footer_data)
if not time_matches:
return None
total_time = 0
time_match = time_matches[0]
time_patterns = [(r"\d+h", 60*60), (r"\d+m", 60), (r"\d+s", 1)]
for pattern, multiplier in time_patterns:
t = _regex_find_ints(pattern, time_match)
if t:
total_time += max(t) * multiplier
return round(total_time, 2)
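The hour/minute/second accumulation above can be sketched in isolation; the footer comment in this example is made up in PrusaSlicer's style:

```python
import re

# Sketch of the h/m/s accumulation used by parse_estimated_time above.
def parse_print_time(comment):
    total = 0
    for pattern, multiplier in [(r"\d+h", 3600), (r"\d+m", 60), (r"\d+s", 1)]:
        found = re.findall(pattern, comment)
        if found:
            # pull the integer back out and scale to seconds
            values = re.findall(r'\d+', " ".join(found))
            total += max(int(v) for v in values) * multiplier
    return total

line = "; estimated printing time (normal mode) = 1h 32m 17s"
assert parse_print_time(line) == 1 * 3600 + 32 * 60 + 17
```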
def parse_thumbnails(self):
thumb_matches = re.findall(
r"; thumbnail begin[;/\+=\w\s]+?; thumbnail end", self.header_data)
if not thumb_matches:
return None
parsed_matches = []
for match in thumb_matches:
lines = re.split(r"\r?\n", match.replace('; ', ''))
info = _regex_find_ints(r".*", lines[0])
data = "".join(lines[1:-1])
if len(info) != 3:
self.log.append(
{'MetadataError': "Error parsing thumbnail header: %s"
% (lines[0])})
continue
if len(data) != info[2]:
self.log.append(
{'MetadataError': "Thumbnail Size Mismatch: detected %d, "
"actual %d" % (info[2], len(data))})
continue
parsed_matches.append({
'width': info[0], 'height': info[1],
'size': info[2], 'data': data})
return parsed_matches
class Slic3rPE(PrusaSlicer):
def __init__(self, name="Slic3r PE",
id_pattern=r"Slic3r\sPrusa\sEdition\s.*\son"):
super(Slic3rPE, self).__init__(name, id_pattern)
def parse_filament_total(self):
return self._parse_max_float(
r"filament\sused\s=\s\d+\.\d+mm", self.footer_data)
def parse_thumbnails(self):
return None
class Slic3r(Slic3rPE):
def __init__(self, name="Slic3r", id_pattern=r"Slic3r\s\d.*\son"):
super(Slic3r, self).__init__(name, id_pattern)
def parse_estimated_time(self):
return None
class SuperSlicer(PrusaSlicer):
def __init__(self, name="SuperSlicer", id_pattern=r"SuperSlicer\s.*\son"):
super(SuperSlicer, self).__init__(name, id_pattern)
class Cura(BaseSlicer):
def __init__(self, name="Cura", id_pattern=r"Cura_SteamEngine.*"):
super(Cura, self).__init__(name, id_pattern)
def parse_first_layer_height(self):
return self._parse_min_float(r";MINZ:.*", self.header_data)
def parse_layer_height(self):
return self._parse_min_float(r";Layer\sheight:.*", self.header_data)
def parse_object_height(self):
return self._parse_max_float(r";MAXZ:.*", self.header_data)
def parse_filament_total(self):
filament = self._parse_max_float(
r";Filament\sused:.*", self.header_data)
if filament is not None:
filament *= 1000
return filament
def parse_estimated_time(self):
return self._parse_max_float(r";TIME:.*", self.header_data)
def parse_thumbnails(self):
return None
class Simplify3D(BaseSlicer):
def __init__(self, name="Simplify3D", id_pattern=r"Simplify3D\(R\)"):
super(Simplify3D, self).__init__(name, id_pattern)
def parse_first_layer_height(self):
return self._parse_min_float(r"G1\sZ\d+\.\d*", self.header_data)
def parse_layer_height(self):
return self._parse_min_float(r";\s+layerHeight,.*", self.header_data)
def parse_object_height(self):
return self._parse_max_float(r"G1\sZ\d+\.\d*", self.footer_data)
def parse_filament_total(self):
return self._parse_max_float(
r";\s+Filament\slength:.*mm", self.footer_data)
def parse_estimated_time(self):
time_matches = re.findall(
r';\s+Build time:.*', self.footer_data)
if not time_matches:
return None
total_time = 0
time_match = time_matches[0]
time_patterns = [(r"\d+\shours", 60*60), (r"\d+\smin", 60),
(r"\d+\ssec", 1)]
for pattern, multiplier in time_patterns:
t = _regex_find_ints(pattern, time_match)
if t:
total_time += max(t) * multiplier
return round(total_time, 2)
def parse_thumbnails(self):
return None
class KISSlicer(BaseSlicer):
def __init__(self, name="KISSlicer", id_pattern=r";\sKISSlicer"):
super(KISSlicer, self).__init__(name, id_pattern)
def parse_first_layer_height(self):
return self._parse_min_float(
r";\s+first_layer_thickness_mm\s=\s\d.*", self.header_data)
def parse_layer_height(self):
return self._parse_min_float(
r";\s+max_layer_thickness_mm\s=\s\d.*", self.header_data)
def parse_object_height(self):
return self._parse_max_float(
r";\sEND_LAYER_OBJECT\sz.*", self.footer_data)
def parse_filament_total(self):
filament = _regex_find_floats(
r";\s+Ext\s.*mm", self.footer_data, strict=True)
if filament:
return sum(filament)
return None
def parse_estimated_time(self):
time = self._parse_max_float(
r";\sCalculated.*Build\sTime:.*", self.footer_data)
if time is not None:
time *= 60
return round(time, 2)
def parse_thumbnails(self):
return None
class IdeaMaker(BaseSlicer):
def __init__(self, name="IdeaMaker", id_pattern=r"\sideaMaker\s.*,",):
super(IdeaMaker, self).__init__(name, id_pattern)
def parse_first_layer_height(self):
layer_info = _regex_find_floats(
r";LAYER:0\s*.*\s*;HEIGHT.*", self.header_data)
if len(layer_info) >= 3:
return layer_info[2]
return None
def parse_layer_height(self):
layer_info = _regex_find_floats(
r";LAYER:1\s*.*\s*;HEIGHT.*", self.header_data)
if len(layer_info) >= 3:
return layer_info[2]
return None
def parse_object_height(self):
bounds = _regex_find_floats(
r";Bounding Box:.*", self.footer_data)
if len(bounds) >= 6:
return bounds[5]
return None
def parse_filament_total(self):
filament = _regex_find_floats(
r";Material.\d\sUsed:.*", self.header_data, strict=True)
if filament:
return sum(filament)
return None
def parse_estimated_time(self):
return self._parse_max_float(r";Print\sTime:.*", self.footer_data)
def parse_thumbnails(self):
return None
READ_SIZE = 512 * 1024
SUPPORTED_SLICERS = [
PrusaSlicer, Slic3rPE, Slic3r, SuperSlicer,
Cura, Simplify3D, KISSlicer, IdeaMaker]
SUPPORTED_DATA = [
'first_layer_height', 'layer_height', 'object_height',
'filament_total', 'estimated_time', 'thumbnails']
def main(path, filename):
file_path = os.path.join(path, filename)
slicers = [s() for s in SUPPORTED_SLICERS]
log = []
metadata = {}
if not os.path.isfile(file_path):
log.append("File Not Found: %s" % (file_path))
else:
header_data = footer_data = slicer = None
size = os.path.getsize(file_path)
metadata['size'] = size
metadata['modified'] = time.ctime(os.path.getmtime(file_path))
with open(file_path, 'r') as f:
# read the default size, which should be enough to
# identify the slicer
header_data = f.read(READ_SIZE)
for s in slicers:
if re.search(s.get_id_pattern(), header_data) is not None:
slicer = s
break
if slicer is not None:
metadata['slicer'] = slicer.get_name()
if size > READ_SIZE * 2:
f.seek(size - READ_SIZE)
footer_data = f.read()
elif size > READ_SIZE:
remaining = size - READ_SIZE
footer_data = header_data[remaining - READ_SIZE:] + f.read()
else:
footer_data = header_data
slicer.set_data(header_data, footer_data, log)
for key in SUPPORTED_DATA:
func = getattr(slicer, "parse_" + key)
result = func()
if result is not None:
metadata[key] = result
fd = sys.stdout.fileno()
data = json.dumps(
{'file': filename, 'log': log, 'metadata': metadata}).encode()
while data:
try:
ret = os.write(fd, data)
except OSError:
continue
data = data[ret:]
if __name__ == "__main__":
# Parse start arguments
parser = argparse.ArgumentParser(
description="GCode Metadata Extraction Utility")
parser.add_argument(
"-f", "--filename", metavar='<filename>',
        help="name of the gcode file to parse")
parser.add_argument(
"-p", "--path", default=os.path.abspath(os.path.dirname(__file__)),
metavar='<path>',
help="optional absolute path for file"
)
args = parser.parse_args()
main(args.path, args.filename)

scripts/install-moonraker.sh (new executable file)

@@ -0,0 +1,108 @@
#!/bin/bash
# This script installs Moonraker on a Raspberry Pi machine running
# Raspbian/Raspberry Pi OS based distributions.
PYTHONDIR="${HOME}/moonraker-env"
# Step 1: Verify Klipper has been installed
check_klipper()
{
if [ "$(systemctl list-units --full -all -t service --no-legend | grep -F "klipper.service")" ]; then
echo "Klipper service found!"
else
echo "Klipper service not found, please install Klipper first"
exit -1
fi
}
# Step 2: Install packages
install_packages()
{
PKGLIST="python3-virtualenv python3-dev nginx"
# Update system package info
report_status "Running apt-get update..."
sudo apt-get update
# Install desired packages
report_status "Installing packages..."
sudo apt-get install --yes ${PKGLIST}
}
# Step 3: Create python virtual environment
create_virtualenv()
{
report_status "Updating python virtual environment..."
# Create virtualenv if it doesn't already exist
[ ! -d ${PYTHONDIR} ] && virtualenv -p /usr/bin/python3 ${PYTHONDIR}
# Install/update dependencies
${PYTHONDIR}/bin/pip install -r ${SRCDIR}/scripts/moonraker-requirements.txt
}
# Step 4: Install startup script
install_script()
{
report_status "Installing system start script..."
sudo cp "${SRCDIR}/scripts/moonraker-start.sh" /etc/init.d/moonraker
sudo update-rc.d moonraker defaults
}
# Step 5: Install startup script config
install_config()
{
DEFAULTS_FILE=/etc/default/moonraker
[ -f $DEFAULTS_FILE ] && return
report_status "Installing system start configuration..."
sudo /bin/sh -c "cat > $DEFAULTS_FILE" <<EOF
# Configuration for /etc/init.d/moonraker
MOONRAKER_USER=$USER
MOONRAKER_EXEC=${PYTHONDIR}/bin/python
MOONRAKER_ARGS="${SRCDIR}/moonraker/moonraker.py"
EOF
}
# Step 6: Start server
start_software()
{
report_status "Launching Moonraker API Server..."
sudo /etc/init.d/klipper stop
sudo /etc/init.d/moonraker restart
sudo /etc/init.d/klipper start
}
# Helper functions
report_status()
{
echo -e "\n\n###### $1"
}
verify_ready()
{
if [ "$EUID" -eq 0 ]; then
echo "This script must not be run as root"
exit 1
fi
}
# Force script to exit if an error occurs
set -e
# Find SRCDIR from the pathname of this script
SRCDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )"/.. && pwd )"
# Run installation steps defined above
verify_ready
check_klipper
install_packages
create_virtualenv
install_script
install_config
start_software
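The defaults file written in Step 5 is plain `KEY=value` shell syntax, read back by the init script below via `. $DEFAULTS_FILE`. A minimal Python sketch of that format (the user and paths here are placeholder values, and this is an illustration, not an official parser):

```python
# Example content mirroring what install_config writes; values are placeholders
defaults = """\
# Configuration for /etc/init.d/moonraker
MOONRAKER_USER=pi
MOONRAKER_EXEC=/home/pi/moonraker-env/bin/python
MOONRAKER_ARGS="/home/pi/moonraker/moonraker/moonraker.py"
"""

config = {}
for line in defaults.splitlines():
    line = line.strip()
    if not line or line.startswith("#"):
        continue  # skip comments and blank lines
    key, _, value = line.partition("=")
    config[key] = value.strip('"')  # shell-style quotes are not part of the value

print(config["MOONRAKER_USER"])
```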

scripts/moonraker-requirements.txt Normal file

@@ -0,0 +1,3 @@
# Python dependencies for Moonraker
tornado==6.0.4
pyserial==3.4

scripts/moonraker-start.sh Executable file

@@ -0,0 +1,55 @@
#!/bin/sh
# System startup script for Moonraker, Klipper's API Server
### BEGIN INIT INFO
# Provides: moonraker
# Required-Start: $local_fs
# Required-Stop:
# X-Start-Before: $klipper
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: moonraker daemon
# Description: Starts the Moonraker daemon
### END INIT INFO
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DESC="moonraker daemon"
NAME="moonraker"
DEFAULTS_FILE=/etc/default/moonraker
PIDFILE=/var/run/moonraker.pid
. /lib/lsb/init-functions
# Read defaults file
[ -r $DEFAULTS_FILE ] && . $DEFAULTS_FILE
case "$1" in
start) log_daemon_msg "Starting moonraker" $NAME
start-stop-daemon --start --quiet --exec $MOONRAKER_EXEC \
--background --pidfile $PIDFILE --make-pidfile \
--chuid $MOONRAKER_USER --user $MOONRAKER_USER \
-- $MOONRAKER_ARGS
log_end_msg $?
;;
stop) log_daemon_msg "Stopping moonraker" $NAME
killproc -p $PIDFILE $MOONRAKER_EXEC
RETVAL=$?
[ $RETVAL -eq 0 ] && [ -e "$PIDFILE" ] && rm -f $PIDFILE
log_end_msg $RETVAL
;;
restart) log_daemon_msg "Restarting moonraker" $NAME
$0 stop
$0 start
;;
reload|force-reload)
log_daemon_msg "Reloading configuration not supported" $NAME
log_end_msg 1
;;
status)
status_of_proc -p $PIDFILE $MOONRAKER_EXEC $NAME && exit 0 || exit $?
;;
*) log_action_msg "Usage: /etc/init.d/moonraker {start|stop|status|restart|reload|force-reload}"
exit 2
;;
esac
exit 0

scripts/uninstall-moonraker.sh Executable file

@@ -0,0 +1,63 @@
#!/bin/bash
# Moonraker uninstall script for Raspbian/Raspberry Pi OS
stop_service() {
# Stop Moonraker Service
echo "#### Stopping Moonraker Service.."
sudo service moonraker stop
}
remove_service() {
# Remove Moonraker from Startup
echo
echo "#### Removing Moonraker from Startup.."
sudo update-rc.d -f moonraker remove
# Remove Moonraker from Services
echo
echo "#### Removing Moonraker Service.."
sudo rm -f /etc/init.d/moonraker /etc/default/moonraker
}
remove_files() {
# Remove API Key file from older versions
if [ -e ~/.klippy_api_key ]; then
echo "Removing legacy API Key"
rm ~/.klippy_api_key
fi
# Remove API Key file from recent versions
if [ -e ~/.moonraker_api_key ]; then
echo "Removing API Key"
rm ~/.moonraker_api_key
fi
# Remove virtualenv
if [ -d ~/moonraker-env ]; then
echo "Removing virtualenv..."
rm -rf ~/moonraker-env
else
echo "No moonraker virtualenv found"
fi
# Notify user of method to remove Moonraker source code
echo
echo "The Moonraker system files and virtualenv have been removed."
echo
echo "The following command is typically used to remove source files:"
echo " rm -rf ~/moonraker"
}
verify_ready()
{
if [ "$EUID" -eq 0 ]; then
echo "This script must not be run as root"
exit 1
fi
}
verify_ready
stop_service
remove_service
remove_files

test/client/index.html Normal file

@@ -0,0 +1,78 @@
<!DOCTYPE html>
<html>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
<script src="/js/main.js?v=0.1.16" type="module"></script>
</head>
<body>
<h3>Klippy Web API Test</h3>
<div id="term" style="width: 60em; height: 20em; overflow:auto; border: 1px solid black">
</div>
<br/>
<input id="cbxAuto" type="checkbox" name="Autoscroll" checked="true"/> Autoscroll
<input id="cbxSub" type="checkbox" name="AutoSub" checked="true"/> Subscribe on Ready
<input id="cbxFileTransfer" type="checkbox" checked="true" name="FileEnable"/> Allow File Operations While Printing
<br/><br/>
<input type="radio" name="test_type" value="http" checked="true">Test HTTP API
<input type="radio" name="test_type" value="websocket">Test Websocket API
<br/><br/>
<form id="gcform">
<input type="text" />
<input type="submit" value="Send GCode"/>
</form>
<br/>
<form id="apiform">
<input type="text" style="width: 30em" value="/printer/objects/list"
title="Should be a URL for an HTTP request (e.g. /printer/objects/list) or a JSON-RPC registered
method name."/>
<input type="submit" value="Send API Command"/>
<span id="apimethod">
<input type="radio" name="api_cmd_type" value="get" checked="true">GET
<input type="radio" name="api_cmd_type" value="post">POST
<input type="radio" name="api_cmd_type" value="delete">DELETE
</span>
</form>
<br/>
<div style="display: flex">
<input type="file" style="display:none" id="upload-file" />
<div style="width: 10em">
<button id="btnupload" class="toggleable" style="width: 9em">Upload GCode</button><br/><br/>
<button id="btndownload" class="toggleable" style="width: 9em">Download Gcode</button><br/><br/>
<button id="btndelete" class="toggleable" style="width: 9em">Delete Gcode</button><br/><br/>
<button id="btngetmetadata" style="width: 9em">Get Metadata</button>
<a id="hidden_link" href="#" hidden>hidden</a>
</div>
<div style="width: 10em">
<button id="btnstartprint" style="width: 9em">Start Print</button><br/><br/>
<button id="btnpauseresume" style="width: 9em">Pause Print</button><br/><br/>
<button id="btncancelprint" style="width: 9em">Cancel Print</button>
</div>
<div>
<select id="filelist" size="8"></select>
</div>
</div>
<br/>
Progress: <progress id="progressbar" value="0" max="100"></progress>
<span id="upload_progress">0%</span><br/><br/>
<button id="btnqueryendstops" style="width: 9em">Query Endstops</button>
<button id="btnsubscribe" style="width: 9em">Post Subscription</button>
<button id="btngetsub" style="width: 9em">Get Subscription</button>
<button id="btngethelp" style="width: 9em">Get Gcode Help</button>
<button id="btngetobjs" style="width: 9em">Get Object List</button>
<button id="btnsendbatch" class="reqws" style="width: 9em">Test GC Batch</button>
<button id="btnsendmacro" class="reqws" style="width: 9em">Test GC Macro</button>
<br/><br/>
<button id="btnestop" style="width: 9em">E-Stop</button>
<button id="btnrestart" style="width: 9em">Restart</button>
<button id="btnfirmwarerestart" style="width: 9em">Firmware Restart</button>
<button id="btnreboot" style="width: 9em">Reboot OS</button>
<button id="btnshutdown" style="width: 9em">Shutdown OS</button>
<button id="btngetlog" style="width: 9em">Klippy Log</button>
<button id="btnmoonlog" style="width: 9em">Moonraker Log</button>
<br/><br/>
<span id="filename" hidden></span><br/>
<div id="streamdiv">
</div>
</body>
</html>

test/client/js/json-rpc.js Normal file

@@ -0,0 +1,228 @@
// Base JSON-RPC Client implementation
export default class JsonRPC {
constructor() {
this.id_counter = 0;
this.methods = new Object();
this.pending_callbacks = new Object();
this.transport = null;
}
_create_uid() {
let uid = this.id_counter;
this.id_counter++;
return uid.toString();
}
_build_request(method_name, uid, kwargs, ...args) {
let request = {
jsonrpc: "2.0",
method: method_name};
if (uid != null) {
request.id = uid;
}
if (kwargs != null) {
request.params = kwargs
}
else if (args.length > 0) {
request.params = args;
}
return request;
}
register_method(method_name, method) {
this.methods[method_name] = method
}
register_transport(transport) {
// The transport must have a send method. It should
// have an onmessage callback that fires when it
// receives data, but it would also be valid to directly call
// JsonRPC.process_received if necessary
this.transport = transport;
this.transport.onmessage = this.process_received.bind(this)
}
send_batch_request(requests) {
// Batch requests take an array of requests. Each request
// should be an object with the following attributes:
// 'method' - The name of the method to execute
// 'type' - May be "request" or "notification"
// 'params' - method parameters, if applicable
//
// If a method has no parameters then the 'params' attribute
// should not be included.
if (this.transport == null)
return Promise.reject(Error("No Transport Initialized"));
let batch_request = [];
let promises = [];
requests.forEach((request, idx) => {
let name = request.method;
let args = [];
let kwargs = null;
let uid = null;
if ('params' in request) {
if (request.params instanceof Object)
kwargs = request.params;
else
args = request.params;
}
if (request.type == "request") {
uid = this._create_uid();
promises.push(new Promise((resolve, reject) => {
this.pending_callbacks[uid] = (result, error) => {
let response = {method: name, index: idx};
if (error != null) {
response.error = error;
reject(response);
} else {
response.result = result;
resolve(response);
}
}
}));
}
batch_request.push(this._build_request(
name, uid, kwargs, ...args));
});
this.transport.send(JSON.stringify(batch_request));
return Promise.all(promises);
}
call_method(method_name, ...args) {
let uid = this._create_uid();
let request = this._build_request(
method_name, uid, null, ...args);
if (this.transport != null) {
this.transport.send(JSON.stringify(request));
return new Promise((resolve, reject) => {
this.pending_callbacks[uid] = (result, error) => {
if (error != null) {
reject(error);
} else {
resolve(result);
}
}
});
}
return Promise.reject(Error("No Transport Initialized"));
}
call_method_with_kwargs(method_name, kwargs) {
let uid = this._create_uid();
let request = this._build_request(method_name, uid, kwargs);
if (this.transport != null) {
this.transport.send(JSON.stringify(request));
return new Promise((resolve, reject) => {
this.pending_callbacks[uid] = (result, error) => {
if (error != null) {
reject(error);
} else {
resolve(result);
}
}
});
}
return Promise.reject(Error("No Transport Initialized"));
}
notify(method_name, ...args) {
let notification = this._build_request(
method_name, null, null, ...args);
if (this.transport != null) {
this.transport.send(JSON.stringify(notification));
}
}
process_received(encoded_data) {
let rpc_data = JSON.parse(encoded_data);
if (rpc_data instanceof Array) {
// batch request/response
for (let data of rpc_data) {
this._validate_and_dispatch(data);
}
} else {
this._validate_and_dispatch(rpc_data);
}
}
_validate_and_dispatch(rpc_data) {
if (rpc_data.jsonrpc != "2.0") {
console.log("Invalid JSON-RPC data");
console.log(rpc_data);
return;
}
if ("result" in rpc_data || "error" in rpc_data) {
// This is a response to a client request
this._handle_response(rpc_data);
} else if ("method" in rpc_data) {
// This is a server side notification/event
this._handle_request(rpc_data);
} else {
// Invalid RPC data
console.log("Invalid JSON-RPC data");
console.log(rpc_data);
}
}
_handle_request(request) {
// Note: This implementation does not fully conform
// to the JSON-RPC protocol. The server only sends
// events (notifications) to the client, and it is
// not concerned with client-side errors. Thus
// this implementation does not attempt to track
// request id's, nor does it send responses back
// to the server
let method = this.methods[request.method];
if (method == null) {
console.log("Invalid Method: " + request.method);
return;
}
if ("params" in request) {
let args = request.params;
if (args instanceof Array)
method(...args);
else if (args instanceof Object) {
// server passed keyword arguments which we currently do not support
console.log("Keyword Parameters Not Supported:");
console.log(request);
} else {
console.log("Invalid Parameters");
console.log(request);
}
} else {
method();
}
}
_handle_response(response) {
if (response.result != null && response.id != null) {
let uid = response.id;
let response_finalize = this.pending_callbacks[uid];
if (response_finalize != null) {
response_finalize(response.result);
delete this.pending_callbacks[uid];
} else {
console.log("No Registered RPC Call for uid:");
console.log(response);
}
} else if (response.error != null) {
// Check ID, depending on the error it may or may not be available
let uid = response.id;
let response_finalize = this.pending_callbacks[uid];
if (response_finalize != null) {
response_finalize(null, response.error);
delete this.pending_callbacks[uid];
} else {
console.log("JSON-RPC error received");
console.log(response.error);
}
} else {
console.log("Invalid JSON-RPC response");
console.log(response);
}
}
}
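The client above speaks standard JSON-RPC 2.0 over whatever transport is registered. As a language-neutral sketch (written in Python, the document's primary language), these are the envelopes `_build_request` produces: a request carries an `id` used to match the eventual response, a notification omits the `id`, and a batch is simply an array of both. `post_printer_emergency_stop` is a method name registered by Moonraker; the notification method name is a placeholder:

```python
import json

def build_request(method, uid=None, params=None):
    # Mirrors JsonRPC._build_request above: include "id" only for
    # requests, and "params" only when arguments are supplied
    request = {"jsonrpc": "2.0", "method": method}
    if uid is not None:
        request["id"] = uid
    if params is not None:
        request["params"] = params
    return request

# A request: the server must answer with a response carrying the same id
req = build_request("post_printer_emergency_stop", uid="0")
# A notification (placeholder method name): fire-and-forget, no id
note = build_request("post_printer_gcode", params=["M117 hello"])
# A batch is just a JSON array of requests/notifications
batch = json.dumps([req, note])

decoded = json.loads(batch)
print(decoded[0]["id"], decoded[1].get("id"))
```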

test/client/js/main.js Normal file

File diff suppressed because it is too large