
Opened 2 weeks ago

Last modified 2 days ago

#1950 new enhancement

move content guesser data to a user-editable file

Reported by: Antoine Martin
Owned by: J. Max Mena
Priority: major
Milestone: 2.4
Component: encodings
Version: 2.3.x
Keywords:
Cc:

Description (last modified by Antoine Martin)

Move the hard-coded definitions added in r17665 (#1699) into a user-editable file.

The defaults should go in /usr/share/xpra, but we should support per-user overrides in ~/.local/share/xpra.

The file should support regular expressions, so that we can match:

  • gmail and ensure this is tagged as "browser":
    _NET_WM_NAME(UTF8_STRING) = "Inbox (7) - email@somedomain.com - Gmail - Google Chrome"
    
  • youtube as "video":
    WM_NAME(UTF8_STRING) = "(9) SomeTitle - YouTube - Google Chrome"
    

etc.
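
For illustration, here is a minimal Python sketch of this kind of title matching. The patterns and content-type labels below are hypothetical examples (not the definitions that actually ship), and the sketch only demonstrates the regular expression matching, not xpra's configuration file parsing:

  import re

  # Hypothetical (regex, content-type) pairs, for illustration only:
  TITLE_PATTERNS = (
      (re.compile(r" - Gmail - "), "browser"),
      (re.compile(r" - YouTube - "), "video"),
  )

  def guess_content_type(window_title):
      # Return the content-type of the first matching pattern, or None.
      for pattern, content_type in TITLE_PATTERNS:
          if pattern.search(window_title):
              return content_type
      return None

  print(guess_content_type("Inbox (7) - email@somedomain.com - Gmail - Google Chrome"))
  print(guess_content_type("(9) SomeTitle - YouTube - Google Chrome"))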

Attachments (1)

30_title.conf (961 bytes) - added by J. Max Mena 3 days ago.
revised 30_title.conf


Change History (6)

comment:1 Changed 2 weeks ago by Antoine Martin

Description: modified (diff)
Owner: changed from Antoine Martin to J. Max Mena

Done in r20364 + r20365.

We ship a set of default configuration files and users can add their own.

@maxmylyn: we should improve those definitions to cover more of the top websites (see r20365 for the format - those are Python regular expressions). You can find the values to match using "xpra info", or just by running xprop on the browser window.

We should take advantage of this hint more: #1952.

Last edited 2 weeks ago by Antoine Martin (previous) (diff)

comment:2 Changed 2 weeks ago by J. Max Mena

I can take a stab at this, but it's going to be very very time-consuming.

I'll bet we can find a shortcut somewhere by using other applications' compatibility lists. That is, we could take the list of youtube-dl's supported sites and put those into a giant list, or do something similar with a list of all news sites, etc. And that's just for a website list; it doesn't even begin to tackle the thousands of desktop applications.

comment:3 Changed 13 days ago by Antoine Martin

I can take a stab at this, but it's going to be very very time-consuming.

No-one is asking you to catalogue every app or every website.
Focusing on the top 10 or top 20 can already give you 80% of the page views / applications used.

we could find a list of youtube-dl's supported sites and put those into a giant list

Not sure what "youtube-dl" means here.

that doesn't even begin to tackle the thousands of desktop applications.

Again, the top 10 is what matters; a generic solution is now tracked in #1956.

comment:4 Changed 3 days ago by J. Max Mena

Owner: changed from J. Max Mena to Antoine Martin

I did a first pass using https://moz.com/top500 as guidance. I'll attach my 30_title.conf for some feedback, as I'm still not 100% sure about it. I think I've got a handle on how Python regular expressions work, but regex really is not a strong suit of mine. At all.

Not sure what "youtube-dl" means here.

youtube-dl is a utility that can download videos from thousands of supported sites. I was theorizing that we could take its compatibility list as a shortcut for grabbing nearly every major video site on the internet. But that would also include hundreds of adult websites, so I'm not sure how we as a project feel about that. (I know I don't care, but I'm not the Xpra project.)

More importantly, it would also catch a number of sites that should primarily be classified as picture sites rather than video.

Changed 3 days ago by J. Max Mena

Attachment: 30_title.conf added

revised 30_title.conf

comment:5 Changed 2 days ago by Antoine Martin

Owner: changed from Antoine Martin to J. Max Mena

I'm still not 100% sure about it

  • it says "top 20", but half of those sites are above that header line - just drop the header altogether, it will change over time anyway
  • no need to repeat "a no-go for the same reason as" for every site this applies to, just group them together in a commented-out section
  • comments were not supported unless the whole line started with "#" (r20498 changes that), so I assume that you didn't actually test any of this? (ie: verifying the content-type for each window with "xpra info")

but Regex really is not a strong suit of mine

Here is a one liner:

python -c 'import re;print(bool(re.search("- Gmail -", "Inbox - Gmail - Whatever")))'

The upstream documentation is the Python 2.7 "re" module reference.
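
To sanity-check several candidate patterns at once, a small throwaway snippet along these lines can help (the titles and patterns are assumed examples, not actual entries from 30_title.conf):

  import re

  # Assumed sample window titles, as they might be reported by "xpra info" or xprop:
  titles = [
      "Inbox (7) - email@somedomain.com - Gmail - Google Chrome",
      "(9) SomeTitle - YouTube - Google Chrome",
      "Holiday album - Some Photo Site - Mozilla Firefox",
  ]
  # Candidate patterns to verify (also assumptions):
  patterns = [r"- Gmail -", r"- YouTube -", r"Photo"]

  for title in titles:
      matches = [p for p in patterns if re.search(p, title)]
      print("%s -> %s" % (title, matches or "no match"))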

that would also include hundreds of adult websites

That's not really a problem.

also catch a number of sites that should be primarily defined as picture sites rather than video

But this is.

And it would make the file unnecessarily long. Let's keep it small and simple.

What content types are allowed?

It's free text.

I see text, picture, and video,

Those are the only 3 content-types that the encoding heuristics currently know about.

but are there more?

No, but we can add more if needed.

As in, what's the complete list of acceptable content types for sites.

FYI: content-types describe window content; sites just happen to be one kind of window content, seen through a browser.

For example, sites like Facebook and Instagram have both pictures and videos - so assigning one or the other will have an impact on the other type of content that appears there. Something like "mixed" is probably the right way to classify them.

No.
Use "text" if the priority is to get crisp text quickly; use "picture" or "video" if the text content is secondary. You should still get crisp text, just sometimes not as quickly, and the graphical content will compress better and look better.
So I would use "text" for Facebook and "picture" for Instagram.
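
Concretely, and again only as a hypothetical sketch of the matching logic (not xpra's actual configuration format), that advice would translate into something like:

  import re

  # Illustrative only: prioritize crisp text for Facebook, image quality for Instagram.
  MIXED_SITE_PATTERNS = (
      (re.compile(r"Facebook"), "text"),
      (re.compile(r"Instagram"), "picture"),
  )

  for title in ("News Feed - Facebook - Google Chrome",
                "Some User (@someuser) - Instagram - Google Chrome"):
      for pattern, content_type in MIXED_SITE_PATTERNS:
          if pattern.search(title):
              print("%s -> %s" % (title, content_type))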

Last edited 2 days ago by Antoine Martin (previous) (diff)