Xpra: Ticket #2261: Dynamic Proxy Auth

I'd like to implement a Docker setup where I have a proxy server with some number of xpra servers running behind it (scaled by replication). The proxy server would have a list of the IPs of these servers. When a user logs in, the user would be (dynamically) mapped to an IP from the list.

In Docker this should be reasonably easy to implement, as the list of IPs is available through a DNS lookup of tasks.${servicename}.
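
For illustration only (not from the ticket): a minimal Python sketch of that lookup, assuming a Swarm service named xpra-server; tasks.<servicename> resolves to one A record per running replica.

import socket

def backend_ips(service="xpra-server"):
    # "tasks.<service>" returns one address per running replica of the service
    try:
        _, _, ips = socket.gethostbyname_ex("tasks." + service)
    except socket.gaierror:
        return []
    return sorted(ips)

print(backend_ips())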

I think most of this can be implemented as a combination of sys and multifile auth. However, I see an issue: I'd like the proxy to pass the password (challenge) through to the server, because for security I don't want the password stored on the proxy, and I don't want the server running with no auth either. Is there any way this can be achieved?

There would also have to be a way of recycling IPs when servers time out or shut down, but I think that's also not so hard to implement: the containers could just shut down, a new container would spawn with a new IP, and then all that's left is to remove the mappings whose IP no longer exists in the list.
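
As a rough sketch of that cleanup (hypothetical names, not xpra code): keep the user-to-IP mapping as a dict and drop any entry whose IP is no longer in the list returned by the DNS lookup.

def prune_mappings(mappings, live_ips):
    # mappings: {username: backend_ip}; live_ips: IPs currently resolved
    # from tasks.<servicename>. Entries pointing at recycled containers
    # are removed so those users can be re-mapped on their next login.
    live = set(live_ips)
    return {user: ip for user, ip in mappings.items() if ip in live}

# example: bob's container is gone, so his mapping is dropped
print(prune_mappings({"alice": "10.0.0.5", "bob": "10.0.0.9"},
                     ["10.0.0.5", "10.0.0.7"]))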

An alternative approach would be to use a reverse proxy (nginx or similar) to do the work of the xpra proxy server. But in this case we would be forced to use a websocket connection and would need to pass the user in an HTTP header or something similar, and then implement the user-to-IP mapping as described above. I think I prefer an xpra auth solution.



Mon, 08 Apr 2019 08:59:31 GMT - Antoine Martin: owner changed

See also: #2125

I think most of this can be implemented as a combination of sys and multifile auth.

multifile is going to be deprecated, use sqliteauth instead. (it is better in every way)

I'd like the proxy to passthrough the password (challenge) to the server as for security I don't want the password stored on the proxy

We could forward authentication requests to the client. OTOH, it should work and I don't see why we're not doing it already.

There would also have to be a way of recycling ip's when servers timeout or shutdown

With auto-registration, via mdns or other technique, the proxy could keep a list of available servers.
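
A sketch of such a registry using the third-party python-zeroconf package (an assumption for illustration - xpra ships its own mdns backends, and the "_xpra._tcp.local." service type is assumed here):

import socket
from zeroconf import Zeroconf, ServiceBrowser

registry = {}   # service name -> (address, port)

class XpraListener:
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info and info.addresses:
            registry[name] = (socket.inet_ntoa(info.addresses[0]), info.port)

    def remove_service(self, zc, type_, name):
        # server went away: free its slot in the registry
        registry.pop(name, None)

    def update_service(self, zc, type_, name):
        self.add_service(zc, type_, name)

zc = Zeroconf()
browser = ServiceBrowser(zc, "_xpra._tcp.local.", XpraListener())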

I would really like to get this into the 3.0 release cycle.


Mon, 08 Apr 2019 09:19:31 GMT - Mark Harkin:

See also: #2125

Yes, that looks like a more generic solution than the Docker-specific one mentioned.

I think most of this can be implemented as a combination of sys and multifile auth.

multifile is going to be deprecated, use sqliteauth instead. (it is better in every way)

Yes, I was leaning towards using no file/db and storing the server list and the user/server mapping in memory, but an sqlite db would work too.

I would really like to get this into the 3.0 release cycle.

Great to hear. #2125 and passthrough auth would be the majority of the work here; server recycling should be relatively easy after that.


Mon, 08 Apr 2019 09:21:28 GMT - Antoine Martin: owner, status changed

passthrough auth would be the majority of the work here

Lemme take care of that in the next few weeks.

server recycling should be relatively easy after that.

Ideally, the mdns method can be made generic enough: a sort of server registry that can be manipulated via mdns or whatever backend we want to add later.


Sun, 14 Apr 2019 05:25:03 GMT - Antoine Martin: owner, status changed

Updates:

This works for me - tested with both python2 and python3, run all the commands as the same user:


Sun, 14 Apr 2019 08:22:00 GMT - Mark Harkin:

I think I'm missing something in your example. Doesn't the proxy just create a server on a new display? What would bind it to use :20?

Thanks.


Sun, 14 Apr 2019 08:55:47 GMT - Antoine Martin:

Doesn't the proxy just create a server on a new display? what would bind it to use :20 ?

If there is only one existing session available for the user, that's the one that will be selected.


Sun, 14 Apr 2019 09:29:26 GMT - Mark Harkin:

If there is only one existing session available for the user, that's the one that will be selected.

I'm having trouble replicating that setup; I'll keep working on it. But assuming this should also work for a proxy-spawned server, I'm getting the following error using the html client:

2019-04-14 09:16:40,950 Warning: client expects an authentication challenge,
2019-04-14 09:16:40,950  sending a fake one
2019-04-14 09:16:41,585 New unix-domain connection received on /run/user/1000/xpra/2c4c0fa210d0-0
2019-04-14 09:16:42,035 New unix-domain connection received on /run/user/1000/xpra/2c4c0fa210d0-0
2019-04-14 09:16:42,085 New unix-domain connection received on /run/user/1000/xpra/2c4c0fa210d0-0
2019-04-14 09:16:42,883 Handshake complete; enabling connection
2019-04-14 09:16:42,895 Error setting up new connection for
2019-04-14 09:16:42,895  Protocol(unix-domain socket:/run/user/1000/xpra/2c4c0fa210d0-0):
2019-04-14 09:16:42,895  client failed to specify any supported encodings
2019-04-14 09:16:42,895 Disconnecting client Protocol(unix-domain socket:/run/user/1000/xpra/2c4c0fa210d0-0):
2019-04-14 09:16:42,895  server error (client failed to specify any supported encodings)

Sun, 14 Apr 2019 11:08:43 GMT - Antoine Martin:

2019-04-14 09:16:42,895 server error (client failed to specify any supported encodings)

You're seeing this when the client is expecting the server to send a challenge but it doesn't send one.


Tue, 16 Apr 2019 16:47:43 GMT - Mark Harkin:

I'm guessing the proxy uses dbus to find the server. I can't do this in a docker container and haven't got a decent setup outside one, so I will probably hold off on testing this until after #2125.

After looking into the mdns setup and #2125, the functionality I think is needed is something like:
- servers report active and max no. of connections through mdns
- proxy selects server with least active connections and rejects if all servers are at maximum.


Thu, 02 May 2019 14:45:00 GMT - Antoine Martin:

I'm guessing the proxy uses dbus to find the server

No, it uses code similar to xpra list, running as the unix user that authenticated.

servers report active and max no. of connections through mdns

That's not very suitable: mdns is used to expose the connection point for individual sessions

proxy selects server with least active connections and rejects if all servers are at maximum.

if you want some kind of load balancing, that's harder


Thu, 02 May 2019 14:56:15 GMT - Mark Harkin:

Replying to Antoine Martin:

I'm guessing the proxy uses dbus to find the server

No, it uses code similar to xpra list, running as the unix user that authenticated.

I'll look at it again, must have been doing something wrong.

servers report active and max no. of connections through mdns

That's not very suitable: mdns is used to expose the connection point for individual sessions

Yeah, sorry, I was thinking of what a "system wide proxy server" might report. For my use case I would be using only individual sessions, but the proxy would still need to know which sessions are already in use by other users.

proxy selects server with least active connections and rejects if all servers are at maximum.

if you want some kind of load balancing, that's harder

User would only connect to a session that isn't already in use, so this would be easier.


Thu, 02 May 2019 16:03:06 GMT - Antoine Martin: owner, status changed

For my use case I would be using only individual sessions but the proxy would still need to know which sessions are already in use by other users.

Very good point, one that I had completely missed. I'll need to look into #2187 earlier than planned.

User would only connect to a session that isn't already in use, so this would be easier.

That can be done.


Fri, 03 May 2019 07:44:50 GMT - Mark Harkin:

Not sure what I'm doing wrong for the test case, running both server and proxy as root in a docker container. Can connect to the server directly if I bind it to a port:

/usr/bin/xpra start :20 -d all --no-daemon --auth=sys --daemon=no --html=yes --dbus-control=no &
/usr/bin/xpra proxy :10000 --no-daemon --bind-tcp=0.0.0.0:10000 --tcp-auth=none -d auth,proxy --dbus-control=no

Sockets are created in /run/xpra:

srw-rw---- 1 root xpra  0 May  3 07:37 6965c014ca64-10000
srw-rw---- 1 root xpra  0 May  3 07:37 6965c014ca64-20

Proxy isn't finding the session on display :20 but something is able to probe the socket with the touch_sockets() function:

centos-xpra_1_5aefcf3f9f14 | 2019-05-03 07:22:32,919 all authentication modules passed
centos-xpra_1_5aefcf3f9f14 | 2019-05-03 07:22:32,920 none.get_sessions() uid=1000, gid=10
centos-xpra_1_5aefcf3f9f14 | 2019-05-03 07:22:32,922 sockdir=DotXpra(/run/user/1000/xpra, ['/run/user/1000/xpra', '/run/xpra'] - 1000:10 - xpra), results=[], displays=[]
centos-xpra_1_5aefcf3f9f14 | 2019-05-03 07:22:32,922 none.get_sessions()=(1000, 10, [], {}, {})
centos-xpra_1_5aefcf3f9f14 | 2019-05-03 07:22:32,922 proxy_auth none.get_sessions()=(1000, 10, [], {}, {})
centos-xpra_1_5aefcf3f9f14 | 2019-05-03 07:22:32,922 proxy_auth(WebSocket(ws socket: 172.24.0.2:10000 <- 172.24.0.1:57684), {..}, None) found sessions: (1000, 10, [], {}, {})
centos-xpra_1_5aefcf3f9f14 | 2019-05-03 07:22:32,922 username(1000)=xpra, groups=[]
centos-xpra_1_5aefcf3f9f14 | 2019-05-03 07:22:32,923 proxy_session: displays=[], start_sessions=False, start-new-session={}
centos-xpra_1_5aefcf3f9f14 | 2019-05-03 07:22:32,923 disconnect(session not found error, ('no displays found',))
centos-xpra_1_5aefcf3f9f14 | 2019-05-03 07:23:19,242 touch_sockets() unix socket paths=['/run/xpra/6965c014ca64-20']

Fri, 03 May 2019 08:52:13 GMT - Antoine Martin:

running both server and proxy as root in a docker container

Probably because you're running as root:

centos-xpra_1_5aefcf3f9f14 | 2019-05-03 07:22:32,920 none.get_sessions() uid=1000, gid=10

It is trying to locate sessions owned by uid=1000.


Fri, 03 May 2019 09:42:22 GMT - Mark Harkin:

OK, I think I have it set up correctly now. The Python client connects with no problem, but the HTML client fails to connect, with the following in the server log:

centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:38:53,256 process_server_packet: challenge
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:38:53,256 password from {'display_name': ':50', 'uid': 1000, 'type': 'unix-domain', 'socket_path': '/run/user/1000/xpra/450f4e8074d3-50', 'socket_dirs': ['/run/user/$UID/xpra', '/run/xpra'], 'gid': 10, 'local': True, 'display': ':50'} / {} = None
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:38:53,256 queueing client packet: challenge
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:38:53,256 sending to client: challenge (queue size=0)
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:38:58,250 run_queue() <bound method ProxyInstanceProcess.timeout_repeat_call of <ProxyInstanceProcess(ws socket: 172.28.0.2:10000 <- 172.28.0.1:54956, started)>>[3, 5000, <bound method ProxyInstanceProcess.timeout_video_encoders of <ProxyInstanceProcess(ws socket: 172.28.0.2:10000 <- 172.28.0.1:54956, started)>>, (), {}]{}
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:38:58,251 run_queue() size=0
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:03,251 run_queue() <bound method ProxyInstanceProcess.timeout_repeat_call of <ProxyInstanceProcess(ws socket: 172.28.0.2:10000 <- 172.28.0.1:54956, started)>>[3, 5000, <bound method ProxyInstanceProcess.timeout_video_encoders of <ProxyInstanceProcess(ws socket: 172.28.0.2:10000 <- 172.28.0.1:54956, started)>>, (), {}]{}
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:03,252 run_queue() size=0
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:08,252 run_queue() <bound method ProxyInstanceProcess.timeout_repeat_call of <ProxyInstanceProcess(ws socket: 172.28.0.2:10000 <- 172.28.0.1:54956, started)>>[3, 5000, <bound method ProxyInstanceProcess.timeout_video_encoders of <ProxyInstanceProcess(ws socket: 172.28.0.2:10000 <- 172.28.0.1:54956, started)>>, (), {}]{}
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:08,253 run_queue() size=0
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:13,253 run_queue() <bound method ProxyInstanceProcess.timeout_repeat_call of <ProxyInstanceProcess(ws socket: 172.28.0.2:10000 <- 172.28.0.1:54956, started)>>[3, 5000, <bound method ProxyInstanceProcess.timeout_video_encoders of <ProxyInstanceProcess(ws socket: 172.28.0.2:10000 <- 172.28.0.1:54956, started)>>, (), {}]{}
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:13,254 run_queue() size=0
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:18,070 run_queue() <bound method ProxyInstanceProcess.idle_repeat_call of <ProxyInstanceProcess(ws socket: 172.28.0.2:10000 <- 172.28.0.1:54956, started)>>(4, <bound method ProxyInstanceProcess.process_client_packet of <ProxyInstanceProcess(ws socket: 172.28.0.2:10000 <- 172.28.0.1:54956, started)>>, (WebSocket(None), ['connection-lost']), {}){}
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:18,070 process_client_packet: connection-lost
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:18,071 stop(WebSocket(None), ('client connection lost',))
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:18,072 stopping proxy instance pid 786:
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:18,072  client connection lost
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:18,073 removing socket /run/user/1000/xpra/450f4e8074d3-proxy-786
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:18,073 sending disconnect to Protocol(unix-domain socket:  <- /run/user/1000/xpra/450f4e8074d3-50)
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:18,073 waiting for network connections to close
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:18,174 proxy instance 786 stopped
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:18,174 ProxyProcess.run() ending 786
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:18,179 reap(<multiprocessing.forking.Popen object at 0x7fca6c0839d0>,)
centos-xpra_1_aae6d9cdfff1 | 2019-05-03 09:39:18,180 reap(<multiprocessing.forking.Popen object at 0x7fca6c0839d0>,) dead processes: [<ProxyInstanceProcess(ws socket: 172.28.0.2:10000 <- 172.28.0.1:54956, stopped)>]

Tue, 07 May 2019 05:12:36 GMT - Antoine Martin: owner, status changed

Ah, with the html5 client. The javascript console shows me:

Uncaught Error: Unknown hash algorithm "sha512"
    at Object.ctx.start (forge.js:9194)
    at XpraClient._gendigest (Client.js:1909)
    at XpraClient._process_challenge (Client.js:1886)
    at XpraProtocolWorkerHost.XpraClient._route_packet [as packet_handler] (Client.js:501)
    at Worker.<anonymous> (Protocol.js:47)

That's because the authentication is being forwarded to the html5 client, but it's using the capabilities which were supplied by the proxy server. The html5 client didn't handle sha512: I updated the https://github.com/digitalbazaar/forge library to the latest version in r22650 and things work fine now.

A more correct solution would be to figure out in advance if the proxy will be handling the authentication itself or if it will forward the challenge to the client, and set the authentication capabilities accordingly. But that's just a lot harder than making sure that both proxy and client have the same capabilities.

Note: the python client can support multiple authentication requests, asking the user via a dialog if necessary, whereas the html5 client only has support for a single username+password input. So unless they are using identical values, only the proxy or the server can use authentication, not both.

@mjharkin: does that work for you?


Tue, 07 May 2019 06:50:08 GMT - Mark Harkin:

@mjharkin: does that work for you?

Yes, works for me. I should have looked for the error client side.

Not sure how you were thinking of handling the mdns side of things, but here are my thoughts: if the sessions also reported the uid of the connected user, this would allow resuming disconnected sessions. There could then be an "mdns" auth that basically functions the same as "none" but uses the mdns sessions instead of the system sessions. If no session with the user's uid exists, they would be connected to an empty session (root uid 0). This could also work for (a single layer of) system proxy servers behind the root proxy, if they report the list of uids. Later on, load balancing would be trivial based on the number of uids, and could be enhanced by also reporting the max users per system proxy through mdns.
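
A minimal sketch of that selection rule (hypothetical data shapes, not xpra's internals): resume the session already associated with this user if one is advertised, otherwise hand out a session nobody is connected to, otherwise reject.

def pick_session(sessions, username):
    # sessions: list of {"host": str, "display": str, "user": str or None}
    for s in sessions:
        if s.get("user") == username:
            return s          # resume the user's disconnected session
    for s in sessions:
        if not s.get("user"):
            return s          # hand out an unused session
    return None               # everything is taken: reject the connection

sessions = [{"host": "10.0.0.5", "display": ":20", "user": None},
            {"host": "10.0.0.7", "display": ":20", "user": "alice"}]
print(pick_session(sessions, "alice"))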

Not sure how often the mdns info is sent, but for this to work it would have to be sent every time a session is connected/disconnected/timed out.


Wed, 08 May 2019 05:35:29 GMT - Antoine Martin:

if the sessions also report the uid of the connected user then this would allow for resuming disconnected sessions

Which uid? uids are not portable across systems

Not sure how often mdns info is sent but for this to work it would have to be every time a session is connected/disconnected/timedout.

It is dynamic.


Wed, 08 May 2019 05:43:29 GMT - Mark Harkin:

Replying to Antoine Martin:

if the sessions also report the uid of the connected user then this would allow for resuming disconnected sessions

Which uid? uids are not portable across systems

Ah yes, I was assuming there would be external management of users (LDAP) which would keep the uids in sync, with pam/sys auth used on the endpoints. I guess if it was to work without this then usernames would have to be used. Maybe that's also cleaner, with no uid lookup on the proxy.


Tue, 14 May 2019 16:55:16 GMT - Antoine Martin:

The mdns option didn't pan out (details in #2187), so now I am looking at something closer to #2125 - I will update that ticket instead, feel free to subscribe to it.

AFAICT, the only thing that can be done to improve things here is #1796 for the html5 client: it would be nice to be able to provide different authentication credentials for the proxy and the server, but I'm not sure how to present that without seeing a proliferation of text input fields.

@mjharkin: In the meantime, I think we can close this ticket?


Tue, 14 May 2019 17:18:36 GMT - Mark Harkin:

@mjharkin: In the meantime, I think we can close this ticket?

Yes, closing and will subscribe to #2125


Tue, 14 May 2019 17:19:00 GMT - Mark Harkin: status changed; resolution set


Sat, 23 Jan 2021 05:46:29 GMT - migration script:

this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/2261