They got what they deserved; karma works that way!

They did something stupid, like not securing their PCs with anti-exploits and anti-rootkits. Then someone came along and stole shit from them because they thought they were invincible.
Post edited February 26, 2021 by fr33kSh0w2012
Orkhepaj: These should be regulated. Companies shouldn't write their own EULAs; they should only pick one from the predefined ones the government accepts, and clearly show the buyer which one it is, so the buyer can just check it on the gov portal, read it once, and know for sure there is no hidden crap in it somewhere. Only then would these be acceptable.
kohlrak: And in another thread you say you're anti-authoritarian. XD

But anyway, I believe this gives government way too much power. I believe laws should be written such that a contract's enforceability depends, first and foremost, on when the agreement was made. From there, a little more chaos would be fine: NDAs can exist, but perhaps with a timer, and only be enforceable on specifics. For example, GOG could agree that a deal between the two parties is private, as well as certain details, but not details of reasonable consumer interest, such as whether or not an update process appears automated on GOG's end, while still barring specifics about what types of files they see. (I don't really need to know that GOG uses Inno, but it would be nice to know some examples of GOG's rejections of updates, on what grounds, and whether those grounds were valid.)
Yes :D
That's why these labels are not so great; it mostly depends on circumstances.
Without regulatory power, the free market stops working, as the strongest will have too much power, which will result in the free market becoming not free at all. But with too much regulation it is the same; you have to find the good middle. :P

I don't think so. Buyers should have an easy and clear way to get information about what they are actually buying, and these EULAs are neither easy nor clear.
Nobody should be expected to reread all this blabla every time they buy something and look for all the little gotchas in it. I like how the EU shares this view.
These are not job contracts, which you only need to read and accept once every few years.
fr33kSh0w2012: They got what they deserved; karma works that way!

They did something stupid, like not securing their PCs with anti-exploits and anti-rootkits. Then someone came along and stole shit from them because they thought they were invincible.
Yeah, they probably went cheap on this, and it was probably an inside job anyway.
Post edited February 26, 2021 by Orkhepaj
kohlrak: And in another thread you say you're anti-authoritarian. XD

But anyway, I believe this gives government way too much power. I believe laws should be written such that a contract's enforceability depends, first and foremost, on when the agreement was made. From there, a little more chaos would be fine: NDAs can exist, but perhaps with a timer, and only be enforceable on specifics. For example, GOG could agree that a deal between the two parties is private, as well as certain details, but not details of reasonable consumer interest, such as whether or not an update process appears automated on GOG's end, while still barring specifics about what types of files they see. (I don't really need to know that GOG uses Inno, but it would be nice to know some examples of GOG's rejections of updates, on what grounds, and whether those grounds were valid.)
Orkhepaj: Yes :D
That's why these labels are not so great; it mostly depends on circumstances.
Without regulatory power, the free market stops working, as the strongest will have too much power, which will result in the free market becoming not free at all. But with too much regulation it is the same; you have to find the good middle. :P
In a truly free market, the strong earn their power and have to keep earning it to hold onto it. These mega-corporations have power on the basis of the corporate shield and all sorts of laws in their favor. It's harder to deplatform in a free market, because at every corner where censorship appears, a new market would form, creating an avenue for competition. I can't just sit down right now and set up my own Visa- or Mastercard-like service to undermine their foothold, because I'd have to comply with all sorts of regulations. Meanwhile, if there were no regulations, or almost none, I could probably say "give me your money, I have no reputation for theft, and I'll only take one penny off every dollar, up to 10 bucks in value." The fans of the deplatformed might be hesitant, but the deplatformed business would be a fool to reject the money I'm bringing them on behalf of some schmuck who took a chance on me. Thus I very quickly start to build a reputation for my business, and with that I can slowly make more and more transactions until I can afford employees, or even a machine to do the transfers for me. Of course, what's a bank going to do with customers complaining that it refuses to let them spend and/or receive their money?
I don't think so. Buyers should have an easy and clear way to get information about what they are actually buying, and these EULAs are neither easy nor clear.
Nobody should be expected to reread all this blabla every time they buy something and look for all the little gotchas in it. I like how the EU shares this view.
These are not job contracts, which you only need to read and accept once every few years.
I think you misunderstand. An NDA (non-disclosure agreement) is a contract between two companies, so my NDA example wouldn't apply to you: it'd apply to GOG and Devolver, for example. It's not my business to know the specific details of GOG's internal management of installer production. However, if there's a problem with the service, perhaps I am entitled to an explanation from either GOG or Devolver on why Banana Simulator DLC #3 is five versions behind the Steam version, rather than letting them point fingers at each other and go "it's their fault." They should be able to make a reasonable case so I can make my own decision, without releasing the home address of the guy at GOG who would be assigned to the task.

Similarly, I do think it's reasonable to have an EULA clause saying I may not sell access to my GOG installers.
kohlrak: I think you misunderstand.
Yeah, looks like we talked about different things. I talked about EULAs from the end customer's point of view, you from the game publisher's. Happens :P
kohlrak: Don't get me wrong, I believe an argument should stand or fall independent of its speaker, but I really must ask about your familiarity with the subject matter.
About a decade of industry experience, practically only in small to medium-sized companies. I am currently a DevOps/SRE. My previous job titles, in descending order: senior backend developer, software architect, DevOps engineer, frontend developer, fullstack web developer.

My experience with immutable infra is somewhat more recent (I was exposed to the concept 3-4 years ago). The system I'm currently designing is the most faithful implementation I've worked on so far (at the moment we SSH into the bastion, and occasionally into less than 10% of the machines to investigate; I'm confident that within a year or two we won't be SSHing at all unless a database crashes).

kohlrak: Indeed, the notion of keeping everything in a VM, doing constant scrubs and all that is great on paper and indeed idealistic.
It's not. I'm using Terraform and am literally provisioning and tearing down fleets of VMs with git. It's great. I haven't had to do a single in-place update so far. If I need to update, I create a new VM with the update and destroy the obsolete one.

It's all in git. If someone needs to understand what happened, they just need to look at the git history.
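
To make that concrete, here is a minimal sketch of the kind of resource definition this workflow revolves around (illustrative only, not the actual module: it assumes the OpenStack Terraform provider, and the image, flavor, network and key pair names are made up):

terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

# One VM, declared in code; the definition lives in git, so `git log` is the audit trail.
resource "openstack_compute_instance_v2" "app" {
  name        = "app-2021-02-26"          # a new name/image per release, never an in-place patch
  image_name  = "ubuntu-20.04-app-baked"  # pre-built image with the dependencies already installed
  flavor_name = "m1.small"
  key_pair    = "ops-keypair"

  network {
    name = "internal"
  }
}

Updating then means committing a new image name and letting terraform apply roll the change out, instead of patching anything by hand over SSH.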

kohlrak: The problem, however, is that these computers are actually designed to do something. I'm sure you'd have a canary seeing the things I've seen passed off as "security" in places like hospitals (which you and I would certainly agree are woefully inadequate).

However, these machines serve a purpose outside of strictly being black boxes holding data. Removing SSH, for example, until you need it would require someone to man the office at all times just to enable or disable it.
You SSH into a machine, usually to change its state. You do a bunch of manual things, and it's not really audited by anyone.

People have a hard time getting a mental picture of all the things that were done on the machine.

When the machine fails, people are not quite sure whether it's the core software or something that was manually done on the machine that caused the problem.

And of course, everybody is deathly afraid to just toss that monstrosity away and reprovision it anew, because they are afraid of losing some undocumented work that somebody did on it.

It's the antithesis of what security needs to be. I guarantee you that with infrastructure as code, metrics/centralised logging and GitOps, you don't really need to SSH into anything 99%+ of the time.

kohlrak: Moreover, VMs, even with hardware virtualization, still don't have the processing power necessary to accomplish the tasks most likely occurring: Cyberpunk 2077 isn't going to be debugged in a VM. Or are you talking about a gateway in particular, and not all the servers as a whole?
The cloud is running on VMs, and a lot of stuff is happening in the cloud. It's less efficient than bare metal, but don't diss it; CERN is doing a lot of things on an OpenStack private cloud.

kohlrak: Also, with all those precautions, you should be made aware that there exist viruses that target VMs, to bypass this kind of protection.
Sure, the cloud engine can be attacked, and it requires some expertise to maintain. Nothing is free (I did delve into the internals of OpenStack; I know how messy it is).

However, in my case, and in the case of most cloud users, I'm not maintaining it, I'm using it. Might as well use it properly, right?
Post edited February 27, 2021 by Magnitus
Zrevnur: And the article also says that employees need to give their personal home computers (at least that is how I interpret it **) to CDPR for scanning.
Hopefully (and I would guess most likely) it is the work computers that they have at home and they aren't developing on personal computers.
Zrevnur: And the article also says that employees need to give their personal home computers (at least that is how I interpret it **) to CDPR for scanning.
joveian: Hopefully (and I would guess most likely) it is the work computers that they have at home and they aren't developing on personal computers.
My 'most likely' guess is that lowly employees don't even have dedicated 'work computers' at their homes...
This/CDPR is in Poland; see for example here:
https://en.wikipedia.org/wiki/List_of_European_countries_by_average_wage
And while I haven't looked at their job offers myself (or don't remember), I remember people here in the forum joking about how little they offer. So I don't think employees have enough money for something like multiple PCs, and I don't see the backwards-managed CDPR getting them computers either...
kohlrak: Indeed, the notion of keeping everything in a VM, doing constant scrubs and all that is great on paper and indeed idealistic.
Magnitus: It's not. I'm using Terraform and am literally provisioning and tearing down fleets of VMs with git. It's great. I haven't had to do a single in-place update so far. If I need to update, I create a new VM with the update and destroy the obsolete one.

It's all in git. If someone needs to understand what happened, they just need to look at the git history.
And if not with things like SSH, how does the GIT connect?
kohlrak: The problem, however, is that these computers are actually designed to do something. I'm sure you'd have a canary seeing the things I've seen passed off as "security" in places like hospitals (which you and I would certainly agree are woefully inadequate).

However, these machines serve a purpose outside of strictly being black boxes holding data. Removing SSH, for example, until you need it would require someone to man the office at all times just to enable or disable it.
Magnitus: You SSH into a machine, usually to change its state. You do a bunch of manual things, and it's not really audited by anyone.

People have a hard time getting a mental picture of all the things that were done on the machine.

When the machine fails, people are not quite sure whether it's the core software or something that was manually done on the machine that caused the problem.

And of course, everybody is deathly afraid to just toss that monstrosity away and reprovision it anew, because they are afraid of losing some undocumented work that somebody did on it.

It's the antithesis of what security needs to be. I guarantee you that with infrastructure as code, metrics/centralised logging and GitOps, you don't really need to SSH into anything 99%+ of the time.
Yet I see SSH used a lot for SFTP. People seem to be afraid of the dreaded command line.

But, yes, if a machine is compromised, it doesn't do you any good not to reset it. You'd have to painfully go line by line looking for malicious code, and go through every file, and that's assuming you've switched to another OS to do so (like auditing a Windows installation with Linux). A reset is far, far more viable, and someone has the up-to-date git repo if it's used for versioning. You can then filter and audit the git history a lot more easily when isolating it.
kohlrak: Moreover, VMs, even with hardware virtualization, still don't have the processing power necessary to accomplish the tasks most likely occurring: Cyberpunk 2077 isn't going to be debugged in a VM. Or are you talking about a gateway in particular, and not all the servers as a whole?
Magnitus: The cloud is running on VMs, and a lot of stuff is happening in the cloud. It's less efficient than bare metal, but don't diss it; CERN is doing a lot of things on an OpenStack private cloud.
I'll diss it. "The cloud" is certainly "nebulous" and has become one of the modern-day catch-alls: "Oh, we just need to buy another node!" That depends on your budget and task. However, "cloud" and "VM vs. bare metal" are two separate issues: you can get the same algorithmic benefits with a bare-metal cloud. The real issue is how many resources are lost to virtualization, as well as whether or not a task is suited to cloud computing. As seen in the quote from my post: "Cyberpunk 2077 isn't going to be debugged in a VM."
kohlrak: Also, with all those precautions, you should be made aware that there exist viruses that target VMs, to bypass this kind of protection.
Magnitus: Sure, the cloud engine can be attacked, and it requires some expertise to maintain. Nothing is free (I did delve into the internals of OpenStack; I know how messy it is).

However, in my case, and in the case of most cloud users, I'm not maintaining it, I'm using it. Might as well use it properly, right?
Indeed.
kohlrak: And if not with things like SSH, how does the GIT connect?
Git is easily auditable and repeatable, and most git platforms have fine-grained access control and review processes.

An SSH session doesn't provide those things. It wasn't built to scale.

It's not directly about security. It's about a host of other benefits that will make your system more manageable and thus more secure.

kohlrak: Yet I see SSH used a lot for SFTP. People seem to be afraid of the dreaded command line.

But, yes, if a machine is compromised, it doesn't do you any good not to reset it. You'd have to painfully go line by line looking for malicious code, and go through every file, and that's assuming you've switched to another OS to do so (like auditing a Windows installation with Linux). A reset is far, far more viable, and someone has the up-to-date git repo if it's used for versioning. You can then filter and audit the git history a lot more easily when isolating it.
Not just if it's compromised. Also if you need to upgrade it (let's face it, upgrades are a risk and things can go wrong; better to do that on an offline image in some pipeline, validate that it works, and then just push the image live). Or if it becomes messed up after something goes horribly wrong (not all processes behave well with regard to the rest of the system when they abruptly terminate, especially if some resource caps were reached).

Basically, to put a hard reset on all the little imperfections that permeate man-made systems.

Otherwise, it's not the command line that I'm afraid of. It actually takes longer to automate VM provisioning properly, in a repeatable way, with something like a pre-built image, cloud-init, or a mix of the two, than to just sling a bunch of commands together on a blank VM over SSH... the first time.

However, the automated VM is repeatable, its speed after the first iteration is unmatched (I don't care how fast you type, you're not typing THAT fast), it's shielded from typing errors or forgotten steps, and if I get run over by a bus, whoever has to take over won't be dying from a panic attack (depending on what he knows, he might need to Google some stuff to fully understand the code, but he won't ever have to wonder what I did; it's codified).

You don't want to be the guy who does ninja stuff by hand and then leaves people scrambling in his wake trying to untangle the magic. Everybody who has to clean up after him ends up absolutely hating that guy. We aren't magicians making snowflake magical wonders whose like will never be repeated. We're engineers making correct, precise, repeatable systems.

kohlrak: I'll diss it. "The cloud" is certainly "nebulous" and has become one of the modern-day catch-alls: "Oh, we just need to buy another node!" That depends on your budget and task. However, "cloud" and "VM vs. bare metal" are two separate issues: you can get the same algorithmic benefits with a bare-metal cloud. The real issue is how many resources are lost to virtualization, as well as whether or not a task is suited to cloud computing. As seen in the quote from my post: "Cyberpunk 2077 isn't going to be debugged in a VM."
Sure, if you have something that runs on a Windows machine, you ain't gonna debug it on a Linux server. With GPU passthrough, you might be able to debug it in a Windows VM, though. But yeah, you get a lot less benefit from virtualization on a desktop (I won't say none; it's always nice to be able to try something radical on a full-fledged OS and then tear it down without having to live with the mess). Virtualization really shines in the cloud.

I'll be frank though, I haven't dabbled with Windows-specific desktop software since like 2009. It's just not something I'm very interested in anymore. The only Windows-compatible stacks that interest me at this point are the ones that are portable (web, cross-platform languages and frameworks, etc).
Post edited February 27, 2021 by Magnitus
Magnitus: To be fair, a lot of people are not doing it; it's horrible. They'd rather do 10 units of work than 2 units of work and 2 units of learning. Work dumb, work harder.
Yeah, I don't get why they switched from their previous ticketing system to Zendesk (especially when Zendesk just seems to be a proprietary SaaS for suckers) when there are GPL-licensed choices, but at least Mantis (their Galaxy issue tracker of choice) is open source.
kohlrak: And if not with things like SSH, how does the GIT connect?
I misread that part of your post. I will try to address that part of your question now.

First of all, to initially provision VMs (i.e., the "SSH into a blank VM" scenario), ideally you use pre-made VM images (so your dependencies are pre-installed in a verified, valid way) and then you use cloud-init (https://cloudinit.readthedocs.io/en/latest/) to pass initial configuration logic to your VM when it is created, like so:

https://github.com/Ferlab-Ste-Justine/openstack-postgres-standalone/blob/master/main.tf#L44
https://github.com/Ferlab-Ste-Justine/openstack-postgres-standalone/blob/master/templates/cloud_config.yaml

Note that in the above case, I technically install dependencies from cloud-init, which is less robust (getting dependencies over a network is a source of errors and is more time-consuming than having them baked into the image) and is purely due to time constraints. The medium- to long-term strategy here will be to only pass configuration settings to cloud-init and boot the VM from an image that already has the dependencies baked in.
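
Stripped down, the pattern in those two files looks something like this (a self-contained sketch, not the linked code: in the real repo the cloud-config is rendered with templatefile() from templates/cloud_config.yaml, and every name below is invented):

# Same OpenStack provider setup as the earlier sketch; every name here is a placeholder.
resource "openstack_compute_instance_v2" "pg" {
  name        = "postgres-standalone"
  image_name  = "ubuntu-20.04-pg-baked"  # ideally the packages are already baked into this image
  flavor_name = "m1.medium"
  key_pair    = "ops-keypair"

  network {
    name = "internal"
  }

  # cloud-init user data: configuration only, no package installs,
  # once the image has the dependencies baked in.
  user_data = <<-EOT
    #cloud-config
    write_files:
      - path: /etc/app/first-boot.conf
        permissions: "0600"
        content: |
          # illustrative first-boot configuration
          listen_port: 5432
    runcmd:
      - [ systemctl, restart, postgresql ]
  EOT
}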

To push the Terraform orchestration to your cloud, you have two options:
- Create a pipeline in a Terraform git orchestration repo that interacts with your cloud provider (it would usually run when you merge to the master/main branch, for example, as master/main would be the source of truth in your system)
- Create a service directly in your system that listens for changes in the git repos (again, probably on the master/main branch) and executes them against the cloud provider

For updates, you just create a new VM, change the state of your system to use it, and then scrap the old one. There is no "update" in the sense that you are mutating an existing VM. For more old-school stateful solutions like PostgreSQL, this will usually entail a bit of downtime (the state is not really made to be distributed across several running VMs). For stateless or properly distributed stateful applications, you can achieve this without downtime for your end user.
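
For the stateless case, the trick can be sketched like this (illustrative; var.release_image is an invented variable naming the freshly baked image, and the flavor/network are placeholders):

variable "release_image" {
  description = "Name of the pre-baked image for the current release"
  type        = string
}

resource "openstack_compute_instance_v2" "api" {
  name        = "api-${var.release_image}"
  image_name  = var.release_image  # a new image means a new VM, not a mutated one
  flavor_name = "m1.small"

  network {
    name = "internal"
  }

  # Bring the replacement up before the old VM is destroyed, so a stateless
  # service behind a load balancer sees no gap during the swap.
  lifecycle {
    create_before_destroy = true
  }
}

For an old-school stateful service you would typically drop create_before_destroy and accept the short gap instead.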

For your Kubernetes orchestration, you can use FluxCD, which will listen to a git repo and sync changes to your k8s cluster from a given branch that will be the source of truth for the state of your cluster. Example:
https://github.com/Ferlab-Ste-Justine/cqdg-environments/tree/master/qa

Here, rolling back a bad Keycloak deployment looked like this: https://github.com/Ferlab-Ste-Justine/cqdg-environments/commit/2db55e715f3f00a19a49b6fcc341bec0caf2d124

I didn't do anything by hand; I just changed the orchestration code and let FluxCD's autosync do its work.
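
For reference, the moving parts behind that autosync are just two Flux objects, a GitRepository and a Kustomization. A generic sketch of them (not the cqdg-environments setup: the names, URL and intervals are invented, and they are written as Terraform kubernetes_manifest resources only to keep a single notation here; normally they are plain YAML applied when you bootstrap Flux):

# Assumes the hashicorp/kubernetes provider is already configured against the cluster.
resource "kubernetes_manifest" "environments_repo" {
  manifest = {
    apiVersion = "source.toolkit.fluxcd.io/v1beta1"
    kind       = "GitRepository"
    metadata   = { name = "environments", namespace = "flux-system" }
    spec = {
      interval = "1m"
      url      = "https://example.com/org/environments.git"  # placeholder URL
      ref      = { branch = "master" }                        # the branch that is the source of truth
    }
  }
}

resource "kubernetes_manifest" "qa_sync" {
  manifest = {
    apiVersion = "kustomize.toolkit.fluxcd.io/v1beta1"
    kind       = "Kustomization"
    metadata   = { name = "qa", namespace = "flux-system" }
    spec = {
      interval = "5m"
      path     = "./qa"  # directory in the repo to reconcile
      prune    = true    # remove cluster objects that disappear from git
      sourceRef = {
        kind = "GitRepository"
        name = "environments"
      }
    }
  }
}

A rollback is then just a git revert on that branch; Flux sees the new commit and reconciles the cluster back to the previous state.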

You can make your own in-house tooling to get a lot of other systems to behave the same way. For example, Airflow will automatically change its jobs based on the Python code in a "DAG" folder.

If you can just pass a directory to Airflow and autosync that directory against a git repo, like so, you are in business:
https://github.com/Ferlab-Ste-Justine/cqdg-environments/blob/master/qa/airflow/deployments-override.yml#L15
https://github.com/Ferlab-Ste-Justine/cqdg-environments/blob/master/qa/git-autosync/configmaps.yml#L15
https://github.com/Ferlab-Ste-Justine/cqdg-dags
Post edited February 28, 2021 by Magnitus
Magnitus: Not just if it's compromised. ... Basically, to put a hard reset on all the little imperfections that permeate man-made systems. ... However, the automated VM is repeatable... You don't want to be the guy who does ninja stuff by hand and then leaves people scrambling in his wake trying to untangle the magic. Everybody who has to clean up after him ends up absolutely hating that guy. We aren't magicians making snowflake magical wonders whose like will never be repeated. We're engineers making correct, precise, repeatable systems.
I think if we leave these as they are here, understanding there is more context to them, we see a fundamental flaw in the logic. The reason you have to do cleanup and things like that is because, well, as you said, man-made systems have imperfections. Your solution is to roll them back to, well, man-made imperfections at a different layer. Perhaps it is indeed an improvement, but, at the end of the day, it's moving the goalposts rather than solving the fundamental problem: instead of the imperfections of every user, you have the imperfections of every coder who made the system. And that boils down to, "who gets to decide which default settings are the good ones?"

And it also misses the fundamental problem I have with the whole "reset" thing. You are logging into a computer to make changes and store a result. If you reset, you're not storing a result, hence my comment on "gateway."

Magnitus: Sure, if you have something that runs on a Windows machine, you ain't gonna debug it on a Linux server. With GPU passthrough, you might be able to debug it in a Windows VM, though. But yeah, you get a lot less benefit from virtualization on a desktop (I won't say none; it's always nice to be able to try something radical on a full-fledged OS and then tear it down without having to live with the mess). Virtualization really shines in the cloud.

I'll be frank though, I haven't dabbled with Windows-specific desktop software since like 2009. It's just not something I'm very interested in anymore. The only Windows-compatible stacks that interest me at this point are the ones that are portable (web, cross-platform languages and frameworks, etc).
This is part of the problem: I don't know what you're actually messing with. I'm a big fan of Linux myself, and, outside of storing files, it does a pretty good job of returning to a default state when I am not using root. That bash variable I set in that script? It's gone before I even log out. I don't need to reinstall my OS over that.

But as for Windows, it's not that different. Most programs are pretty good at not relying on environment variables, and even make user-specific configurations when you don't have a sysadmin poking around in the "default configs."

kohlrak: And if not with things like SSH, how does the GIT connect?
Magnitus: I misread that part of your post. I will try to address that part of your question now. ... If you can just pass a directory to Airflow and autosync that directory against a git repo, like so, you are in business.
OK, reading this post, I'm absolutely certain I know what's going on. You're not doing resets, like you say; what you're doing is a "more reliable cleanup." The issue here is that we have totally different starting-point models, which is why we have an issue.

In my setup, I code on the same platform where the code runs. Why? Because I only have one computer to work with, and it's running a Pentium 4. I think that, given my economic situation, this is excusable, especially as I'm the only one with superuser access. And even then, I have disclaimers warning about where my security could be flawed. I'm also not a corporate entity.

Now, a corporation, on the other hand, has more resources to work with. A corporation like GOG should be able to have dedicated SQL, git, and other servers on separate hardware. If the only purpose of the SQL server is to handle SQL, as it should be, then the SSH credentials should not only sit behind a gateway but should only be given to supervisors of the SQL team: you don't need SSH to do SQL, but you might need it to do security updates when you can't get to the office and something gets found and patched (in real life, though, most people can get to the office). Your git server exists for storing code, versioning, etc., and is likely to be accessed by various teams, but, when possible, SSH should be limited to SFTP (I'm not sure how to go about that, but it certainly should be the case). If I'm testing code, I should be running it on the machine I'm coding on, so I don't need to use Vim and other things on another machine. I most definitely don't need root access, unless my supervisor can't set up my account to give me access to my git repo.

The question I have for CDP is: why was investor data stored on the same server as the git repo? Because, well, that was the info stated to be compromised. Either that, or there are "universal credentials," at which point we need to ask whether every game on GOG was stolen, and what about customer info? We clearly had information on servers it didn't need to be on, or that should have had servers dedicated to those tasks. I'm on an IRC network where people are saying the monthly price of a VPS is between 5 and 10 bucks, for very good performance, which indicates to me that CDP, running its own hardware, should be able to run it even cheaper. Realistically, CDP could have dedicated hardware for a git server using a Raspberry Pi with a multi-terabyte hard drive attached via USB. How often are they committing code? (Then again, we have the usual corporate practice of the daily commit to deal with, but a simple re-analysis of that could easily result in specific commits for specific goals, or dedicated hours on certain days to commit, or something like that.)

SQL's a little heavier, but I think we can put the investor data on a separate SQL server from the customer data, and the most the git server should hold is lorem ipsum. I could go on, but I'm running out of space for this reply. I don't think complete resets are necessary if the holes weren't as present or as lucrative in the first place.
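
In the same Terraform/OpenStack terms used earlier in the thread, the "SSH only from behind a gateway" part might look something like this hypothetical sketch (the group name and the bastion address are invented):

# Only the bastion host may open an SSH connection to the database servers.
resource "openstack_networking_secgroup_v2" "sql_servers" {
  name        = "sql-servers"
  description = "Database hosts: no direct SSH except from the bastion"
}

resource "openstack_networking_secgroup_rule_v2" "ssh_from_bastion_only" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "10.0.0.10/32"  # the bastion's private address, nothing else
  security_group_id = openstack_networking_secgroup_v2.sql_servers.id
}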
Starting yesterday, the source code for The Witcher 3 next-gen and Cyberpunk is available for download.

True or not, the size is around 761.78 GB.
Post edited June 05, 2021 by DrazenCro
I would expect most of that to be high-resolution uncompressed textures, models and sounds.

There certainly wouldn't be 700 GB of code.
Mortius1: I would expect most of that to be high-resolution uncompressed textures, models and sounds.

There certainly wouldn't be 700 GB of code.
1.44 MB of code. The rest is textures, models, sounds, lighting, and one uncompressed .RAW image of Triss's bum that takes up 8 GB and is atomically accurate.