./play.it 2.33.3 bugfix release

The ./play.it 2.33.3 bugfix update has been released; it is available from http://downloads.dotslashplay.it/releases/2.33.3/

The following fixes and improvements are included in this update:
* Drop obsolete function application_exe_escaped.
* Reduce code duplication when fetching the type of the default application.
* Remove shebang from fish completion file.
* Fix config file path in manpage and zsh-completion.
* Add --list-packages option to zsh completion.
* Add support for multiple Qt 5 native libraries:
- libQt5Core.so.5
- libQt5Gui.so.5
- libQt5Widgets.so.5
We are going to update the packages provided by Debian right after posting this news article, so Debian unstable users should expect to have access to ./play.it 2.33.3 later today, or maybe tomorrow. Users of other distributions should, as usual, get in touch with their maintainers to get updated packages.

git.dotslashplay.it is back!

After a lot of time and energy spent fighting off the pillaging bots used to feed generative AI services, these finally gave up (well, except for the ones that are still hitting our server, but those are constantly fed a stream of garbage pseudo-text). So Web access to the ./play.it git repositories is once again available at the canonical URL: http://git.dotslashplay.it/
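For the curious, such a trap does not need to be anything fancy. Here is a minimal sketch of the idea, not our actual setup (the CGI form and the wordlist path are purely illustrative): a script that slowly streams pseudo-text forever.

#!/bin/sh
# Minimal tarpit sketch: slowly stream random words forever, so a
# misbehaving crawler wastes its time on worthless pseudo-text.
# Assumes a wordlist at /usr/share/dict/words and the shuf utility.
printf 'Content-Type: text/plain\n\n'
while true; do
    shuf -n 8 /usr/share/dict/words | tr '\n' ' '
    echo
    sleep 1
done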
vv221: Yes, and it would actually be quite easy to build these two lists.
Thank you. That sounds easy enough.

By the way, http://git.dotslashplay.it/ sends my browser (LibreWolf 142) to the never-ending literary black hole.
Either your bot detection is not perfect or I ought to do a Voight-Kampff test.
Gede: By the way, http://git.dotslashplay.it/ sends my browser (LibreWolf 142) to the never-ending literary black hole.
Either your bot detection is not perfect or I ought to do a Voight-Kampff test.
It is not a detection problem; it means your IP provider played an active role in attacking my server, and ended up on the list of blackholed providers for that reason.

Please tell me what IP(s) you use to access ./play.it-related websites, and I can add exceptions for these.
Post edited September 30, 2025 by vv221
./play.it 2.34.0 first release candidate

A first release candidate is ready for the upcoming ./play.it 2.34.0 feature update: http://git.dotslashplay.it/play.it/?h=release/2.34.0

A couple highlights from this upcoming release follow:

New .gpkg.tar package format for Gentoo
Generation of packages using ebuild is replaced with .gpkg.tar packages, relying only on standard tools that are not specific to Gentoo. This gives the ability to build packages for Gentoo from any system able to run ./play.it.
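For context on why no Gentoo-specific tool is required: a .gpkg.tar binary package (specified in Gentoo GLEP 78) is a plain tar archive nesting a format marker, a metadata tarball and the package image, so standard tar is enough to inspect or assemble one. A quick look, with a made-up package name:

# Peek inside a Gentoo binary package; plain tar is all it takes.
tar -tf some-game-1.0-1.gpkg.tar
# Expected layout, per GLEP 78:
#   some-game-1.0-1/gpkg-1
#   some-game-1.0-1/metadata.tar.xz
#   some-game-1.0-1/image.tar.xz
#   some-game-1.0-1/Manifest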

Improved support for Web games
Most games developed in JavaScript come with an embedded copy of Google Chrome that is used as some kind of virtual machine. With this update the Google Chrome copy is dropped and the games are run from the system-provided Firefox instead.
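As a rough idea of what a launcher of this kind looks like (a sketch only: the paths, profile location and game name are illustrative, the real launchers are generated by ./play.it):

#!/bin/sh
# Hypothetical launcher sketch: run a JavaScript game from the
# system-provided Firefox instead of the bundled Chrome copy.
# A dedicated profile keeps the game away from the user profile.
game_root='/path/to/some-game'
profile_dir="${XDG_DATA_HOME:-$HOME/.local/share}/some-game/firefox-profile"
mkdir -p "$profile_dir"
exec firefox --new-instance --profile "$profile_dir" "file://$game_root/index.html"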

Improved support for Unreal Engine 3 games
Some tweaks are applied to all Unreal Engine 3 games, to prevent input problems when running Windows builds through WINE.
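The exact tweaks are listed in the changelog; as an illustration of the kind of fix involved, a classic workaround for broken mouse input in UE3-era games running through WINE is to force DirectInput mouse warping in the WINE registry (an example of the technique, not necessarily the tweak we apply):

# Force DirectInput mouse warping, a classic WINE fix for
# mouse input problems in Unreal Engine 3 era games.
wine reg add 'HKEY_CURRENT_USER\Software\Wine\DirectInput' \
    /v MouseWarpOverride /t REG_SZ /d force /f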

Improved support for GameMaker games
In addition to most of the game properties now having implicit default values, making support for extra GameMaker games much easier to add, some tweaks are applied to these games to avoid a crash on Mesa and to avoid broken behaviours on non-USA locales.
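To illustrate the locale part: GameMaker games are known to misparse numbers on locales that use a comma as the decimal separator, so a launcher-side workaround of the kind described here is to force a C numeric locale before starting the game (a sketch, the game path is made up):

#!/bin/sh
# Hypothetical sketch: avoid broken number parsing on non-USA locales
# by forcing the C numeric locale before starting the game.
export LC_NUMERIC=C
exec '/path/to/some-gamemaker-game' "$@"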

---

The full changelog for this new release covers more than these highlights; you can read the full release notes here: http://git.dotslashplay.it/play.it/tree/CHANGELOG?h=release/2.34.0

The plan for this update is to gather your feedback, using the usual contact methods, and to finally release it once it has spent one full month with no new bug reported.
First release candidate of 2.33.4 bugfix update

The first release candidate of the ./play.it 2.33.4 bugfix update is ready: http://git.dotslashplay.it/play.it/?h=release/2.33.4

It includes a single fix, specific to Debian, contributed by Bernd Schumacher:
* Debian: Fix dependency on libQt5Core.so.5 / libQt5Gui.so.5.
Please report any problem you might find with this proposed update, or any other problem you know about that has not been fixed yet. After one full week with no bug reported, we are going to release this bugfix update and patch the 2.34.0 release candidate to include it.
Post edited October 14, 2025 by vv221
Second release candidate of 2.33.4 bugfix update

The ./play.it 2.33.4 bugfix update has a new release candidate: http://git.dotslashplay.it/play.it/?h=release/2.33.4

In addition to the fix contributed by Bernd Schumacher, it now also fixes the package context support of a couple of functions:
* Fix package context setting from several functions:
- icons_inclusion_single_application
- content_inclusion
* Debian: Fix dependency on libQt5Core.so.5 / libQt5Gui.so.5.
These extra changes should improve the support we provide for Divinity: Original Sin 2.

Please report any problem you might find with this proposed update, or any other problem you know about that has not been fixed yet. After one full week with no bug reported, we are going to release this bugfix update and patch the 2.34.0 release candidate to include it.
Post edited October 14, 2025 by vv221
./play.it 2.33.4 bugfix release

The 2.33.4 bugfix update has been released: http://downloads.dotslashplay.it/releases/2.33.4/

Here is the list of fixes it includes:
* Fix package context setting from several functions:
- icons_inclusion_single_application
- content_inclusion
* Debian: Fix dependency on libQt5Core.so.5 / libQt5Gui.so.5.
More importantly, this is the first release including contributions to the core library by Bernd Schumacher, who we are happy to welcome into our team!

As usual, updated packages should be available soon in the Debian archive. For other distributions you should get in touch with your maintainers.

---

./play.it 2.34.0 second release candidate

A new release candidate is ready for the upcoming ./play.it 2.34.0 feature update: http://git.dotslashplay.it/play.it/?h=release/2.34.0

A couple highlights from this upcoming release follow:

New .gpkg.tar package format for Gentoo
Generation of packages using ebuild is replaced with .gpkg.tar packages, relying only on standard tools that are not specific to Gentoo. This gives the ability to build packages for Gentoo from any system able to run ./play.it.

Improved support for Web games
Most games developed in JavaScript come with an embedded copy of Google Chrome that is used as some kind of virtual machine. With this update the Google Chrome copy is dropped and the games are run from the system-provided Firefox instead.

Improved support for Unreal Engine 3 games
Some tweaks are applied to all Unreal Engine 3 games, to prevent input problems when running Windows builds through WINE.

Improved support for GameMaker games
In addition to most of the game properties now having implicit default values, making support for extra GameMaker games much easier to add, some tweaks are applied to these games to avoid a crash on Mesa and to avoid broken behaviours on non-USA locales.

---

The full changelog for this new release covers more than these highlights; you can read the full release notes here: http://git.dotslashplay.it/play.it/tree/CHANGELOG?h=release/2.34.0

The plan for this update is to gather your feedback, using the usual contact methods, and to finally release it once it has spent one full month with no new bug reported.
Post edited October 21, 2025 by vv221
./play.it 2.34.0 third release candidate

A new release candidate is ready for the upcoming ./play.it 2.34.0 feature update: http://git.dotslashplay.it/play.it/?h=release/2.34.0

Compared to the previous release candidate, it includes a fix to the dependencies_list_native_libraries compatibility wrapper.
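For readers unfamiliar with these wrappers: they keep entry points from older ./play.it releases working by delegating to the current implementation. A generic sketch of the pattern (the function it delegates to is made up for the example):

# Compatibility wrapper pattern: the old function name stays available
# for game scripts targeting previous releases, and simply delegates.
dependencies_list_native_libraries() {
    current_native_libraries_listing "$@"
}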

---

The full changelog for this new release covers much more; you can read the full release notes here: http://git.dotslashplay.it/play.it/tree/CHANGELOG?h=release/2.34.0

The plan for this update is to gather your feedback, using the usual contact methods, and to finally release it once it has spent one full month with no new bug reported.
vv221: Yes, and it would actually be quite easy to build these two lists.
Gede: Thank you. That sounds easy enough.

By the way, http://git.dotslashplay.it/ sends my browser (LibreWolf 142) to the never-ending literary black hole.
Either your bot detection is not perfect or I ought to do a Voight-Kampff test.
Ah, is that what it is. Me too: I tried to view the download pages and got the same gibberish, from both Firefox (ad blocker and the works) and Edge (no ad blocker but strict privacy settings).
richardjmoss: (…)
If you tell me the IP you get that problem with, I can check if it is from an ISP I would be willing to unblock.

If for some reason you don’t want to share your IP here (it is not private information: mine are 89.234.186.75 and 2a00:5884:8300::1), you can send it to me by e-mail. Or, if you know it, give me your ISP’s AS number directly. I’m going to use the IP only to find the ISP AS number anyway.
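For reference, mapping an IP to its ISP AS number is a one-liner with the Team Cymru whois service (shown here with my own IP from above):

# Look up the origin AS of an IP address via Team Cymru.
whois -h whois.cymru.com ' -v 89.234.186.75'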
vv221: It is not a detection problem; it means your IP provider played an active role in attacking my server, and ended up on the list of blackholed providers for that reason.

Please tell me what IP(s) you use to access ./play.it-related websites, and I can add exceptions for these.
I'm afraid I don't seem to notice the forum notifications.

This is why we cannot have nice things. So many people complain about this.
We've had web crawlers for a long time. Why is this a problem now? Is it caused by too many crawlers? The dynamic nature of websites, which makes each call more "expensive"?
Bot detection and captchas are so pervasive now!

If I am understanding you correctly, you block the entire range and keep a perpetual grudge against it. I know these people have not been nice to you, but I am afraid that you may be hurting yourself too in this way. Would a really lazy response time, bordering on a timeout, help?

Still, I am curious whether you have any not-so-smart crawler persisting in that web of yours ad infinitum.

Thank you for kindly considering me for access. Please add 2e.bd.c6.bc.
Gede: (…)
Crawlers used by LLM enterprises send literally millions of times more requests than the ones used before them by search engine enterprises. This is a very real problem that caused the disappearance of many small websites that simply could not bear the load. Before I started blocking them, they accounted for more than 99.5% of the traffic hitting my server.

I use neither bot detection here, nor captchas. I (very strongly) dislike these systems, and am not going to adopt them. Blocking on my server is done only through manual log analysis, targeting bad actors who have already attacked my server. Nothing is automated; nothing is blocked without me explicitly "giving the order".

I block only the ISPs providing the IPs used to attack my server. That’s a tiny fraction of all the ISPs out there. I unblock most of the ones that also provide access to legitimate users, if said users request it (many actually did not request an unblock, telling me they would switch to a more respectful IP provider instead).

---

Gede: Still, I am curious whether you have any not-so-smart crawler persisting in that web of yours ad infinitum.
Yes, the crawler from Anthropic, used to power the LLM nicknamed "Claude", sends two million requests a day into this trap. It has been doing so for months, and it does not look like it is going to stop anytime soon.

---

Gede: Thank you for kindly considering me for access. Please add 2e.bd.c6.bc.
I don’t know what this is, but clearly not a valid public IP ;)

You can get your current IP in many ways; an easy one is the following command:
curl ifconfig.pro
Post edited October 23, 2025 by vv221
vv221: Crawlers used by LLM enterprises send literally millions of times more requests than the ones used before them by search engine enterprises.
And they are doing so because of their arms race to acquire the largest training set, rather than trying to co-exist in a mutually cooperative (or symbiotic) relationship with the websites?

Here is the bit I am having trouble understanding: once a crawler has mapped your website, I expect it to direct its attention to some other website, and only return to yours after some time, for updates.
That would mean that, excluding some CMS linking tricks that may be confusing, the negative impact a crawler causes would be proportional to the number of pages you host, times the number of crawlers that reach them (which I would estimate to be in the lower hundreds). And yet people describe these crawlers as bulls in china shops. Why? Don't they just go away after one traversal of the website?

vv221: Blocking on my server is done only through manual log analysis, targeting bad actors who have already attacked my server. Nothing is automated; nothing is blocked without me explicitly "giving the order".
I am glad that you are able to handle (or tolerate) the reaction time that comes with the human scalability limit. I do not enjoy the automatic route myself (we end up with automatisms fighting automatisms), but I can understand why they are put there.

vv221: I block only the ISPs providing the IPs used to attack my server. That’s a tiny fraction of all the ISPs out there. I unblock most of the ones that also provide access to legitimate users, if said users request it (many actually did not request an unblock, telling me they would switch to a more respectful IP provider instead).
I find this curious, so I am trying to wrap my head around it. As far as I know, my IP comes from my ISP, but my traffic could also reach your website with an IP set by some sort of NAT system, such as a VPN; I cannot think of anything else. I can see people changing VPN providers, but an ISP, that is a decision of greater consequence.
Would a nasty crawler operator use a VPN? Oh, for sure I can see that happening! But using my ISP to attack your website is something I find more unexpected.

Gede: Thank you for kindly considering me for access. Please add 2e.bd.c6.bc.
vv221: I don’t know what this is, but clearly not a valid public IP ;)
Oh, my half-paranoid side thought it would be safer to hex-encode it. I told him that would not be obvious and that a small note should be added to explain it, but it was late, the intention got lost, and the note was not added to the post. Apologies.

vv221: Or, if you know it, give me your ISP’s AS number directly.
I keep getting surprised at the amount of stuff that you know! I had no idea there was an AS number, nor do I understand how it works! I thought this was all done with netmasks, routing tables and gateways. :-)

Anyway, I think my ASN is "AS12353". I am sorry for the trouble.
Gede: And they are doing so because of their arms race to acquire the largest training set, rather than trying to co-exist in a mutually cooperative (or symbiotic) relationship with the websites?

Here is the bit I am having trouble understanding: once a crawler has mapped your website, I expect it to direct its attention to some other website, and only return to yours after some time, for updates.
That would mean that, excluding some CMS linking tricks that may be confusing, the negative impact a crawler causes would be proportional to the number of pages you host, times the number of crawlers that reach them (which I would estimate to be in the lower hundreds). And yet people describe these crawlers as bulls in china shops. Why? Don't they just go away after one traversal of the website?
It’s all about pillage, not cooperation. They don’t care that they are bringing small websites to their knees. They are known to even scan the same pages endlessly, just in case something changed in the last minute.

Hey, I’d go one step further: they *want* that to happen. Because, guess who is selling hosting services that include protection against that kind of attack? That’s right, the exact same enterprises that are behind these attacks.

---

Gede: I find this curious, so I am trying to wrap my head around it. As far as I know, my IP comes from my ISP, but my traffic could also reach your website with an IP set by some sort of NAT system, such as a VPN; I cannot think of anything else. I can see people changing VPN providers, but an ISP, that is a decision of greater consequence.
Would a nasty crawler operator use a VPN? Oh, for sure I can see that happening! But using my ISP to attack your website is something I find more unexpected.
My sample is biased, because many people getting in touch with me are computing-related professionals with access to several networks. So they do not switch ISPs when blocked; they simply use another of the networks they already have access to.

About your ISP being used for attacks: it’s because most LLM enterprises do not use their own hardware to run Web scans. Instead they go through malware, installed mostly as Web browser extensions or Android/iOS applications. So they zombify people’s smartphones to include them in massive distributed botnets, which are in turn used to attack/scan the Web while the people profiting from this stay cowardly hidden.

---

Gede: Anyway, I think my ASN is "AS12353". I am sorry for the trouble.
That checks out with your location and my list of blocked actors. I unblocked them, and added a note to make sure I do not block them again.

Feel free to send me a ping if you happen to encounter the fake text again.
./play.it 2.34.0 fourth release candidate

A new release candidate is ready for the upcoming ./play.it 2.34.0 feature update: http://git.dotslashplay.it/play.it/?h=release/2.34.0

This new candidate fixes a bug that would always cause a hashsum mismatch error with archives that have no expected MD5 hash. That would happen at least with Blizzard classic games: Warcraft III, StarCraft, Diablo II.
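A bug like this usually comes down to comparing a computed hash against an empty expected value; here is a minimal sketch of the corrected guard (function and variable names are illustrative, not the actual ./play.it code):

# Hypothetical sketch of the fixed check: skip verification entirely
# when no expected MD5 hash is known for the given archive.
archive_integrity_check_md5() {
    expected_hash="$1"
    archive_file="$2"
    if [ -z "$expected_hash" ]; then
        return 0 # no expected hash, nothing to verify against
    fi
    computed_hash=$(md5sum "$archive_file" | cut -d' ' -f1)
    test "$computed_hash" = "$expected_hash"
}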

---

The full changelog for this new release covers much more; you can read the full release notes here: http://git.dotslashplay.it/play.it/tree/CHANGELOG?h=release/2.34.0

The plan for this update is to gather your feedback, using the usual contact methods, and to finally release it once it has spent one full month with no new bug reported.