Are you sure? I’m not very active in that ecosystem, but if that was prevalent in the past, surely there’s still tutorials and stuff out there that people would follow and create such projects even today?
More than that, it seems to me that the official Python docs for packaging [still] talk about setup.py. Why would people not use that?
he l p
looks like a multi-threading or concurrency issue
Have you considered creating a ticket called “Can’t ask questions without joining discord”?
Do you think it would have more answers if it were on GitHub discussions?
Release must be documented
It’s not a must [unless you put it into a contract], it’s a should or would be nice
Many, if not most, projects don’t follow a good, obvious, transparent, documented release or change management process.
I wish for it, too, but it’s not the reality of projects. Most people don’t seem to care about it as much as I do.
I agree blind acceptance/merging is problematic. But for some projects (small scope/size/personal-FOSS, trustworthy upstream) I see it as pragmatic rather than problematic.
I would consider four approaches.
1. Commit and push manually and deliberately
I commit changes early and often anyway. I also push regularly, seeing the remote as a safe and remote (as in backup) baseline and reference state.
The question would be: do I switch machines while I’m still exploring things in the workspace, without committing before moving away, and would I want those changes on the other PC? Then this would not be enough.
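As a rough sketch, that baseline workflow is just the usual loop (assuming the remote is set up as origin with an upstream branch):

git add -A
git commit -m "wip: exploring"    # commit early and often, even unfinished work
git push                          # the remote as the safe, off-machine baseline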
2. Auto-push all local git references into a separate space on the git remote
Git branches are refs, commit pointers, just like other refs are. And they can be put under arbitrary paths: refs/heads/ holds the branches. So I can replicate and regularly update all my branches under refs/pcreplica/laptop/*, and then, on the other PC, list or fetch those, individually or all of them, regularly and automatically, or manually.
git push origin refs/heads/*:refs/pcreplica/laptop/*
git ls-remote
git fetch origin refs/pcreplica/laptop/*:refs/laptop/*
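A rough sketch of how this could be wired up, assuming the remote is named origin and using the replica namespace from above (the alias name is just an example):

# on the laptop: an alias that replicates every local branch in one go
git config alias.replicate "push origin refs/heads/*:refs/pcreplica/laptop/*"
git replicate                     # run manually, or from a scheduled task / timer

# on the other PC: also map the replica refs into a local namespace on every fetch
git config --add remote.origin.fetch "+refs/pcreplica/laptop/*:refs/laptop/*"
git fetch origin
git for-each-ref refs/laptop/     # list what the laptop had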
3. Auto-push the/a local branch like you suggested
My concern here would be: is only one branch enough? Is only the current branch enough?
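A minimal sketch of what that could look like (a post-commit hook; assumes the remote is origin, and it really only covers the branch being committed to):

#!/bin/sh
# .git/hooks/post-commit (must be executable)
branch=$(git symbolic-ref --quiet --short HEAD) || exit 0   # skip detached HEAD
git push --quiet origin "$branch" &                         # push in the background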
4. Remoting into the other system
Are the systems both online? Can I remote into / connect to the other one when need be?
Has features ✅
Code before:
async function createUser(user) {
  if (!validateUserInput(user)) {
    throw new Error('u105');
  }
  const rules = [/[a-z]{1,}/, /[A-Z]{1,}/, /[0-9]{1,}/, /\W{1,}/];
  if (user.password.length >= 8 && rules.every((rule) => rule.test(user.password))) {
    if (await userService.getUserByEmail(user.email)) {
      throw new Error('u212');
    }
  } else {
    throw new Error('u201');
  }
  user.password = await hashPassword(user.password);
  return userService.create(user);
}
Here’s how I would refactor it for my personal readability. I would certainly introduce class types for some structuring of concerns instead of dangling functions, but that’d be the next step, and I’m also not too familiar with the differences between TypeScript and JavaScript.
const passwordRules = [/[a-z]{1,}/, /[A-Z]{1,}/, /[0-9]{1,}/, /\W{1,}/]
const validatePassword = (plainPassword) => plainPassword.length >= 8 && passwordRules.every((rule) => rule.test(plainPassword))
const userExists = async (email) => Boolean(await userService.getUserByEmail(email))

async function createUser(user) {
  // What is validateUserInput? Why does it not validate the password?
  if (!validateUserInput(user)) throw new Error('u105')
  // Why do we check the password before the email? I would expect the other way around.
  if (!validatePassword(user.password)) throw new Error('u201')
  if (await userExists(user.email)) throw new Error('u212')
  const hashedPassword = await hashPassword(user.password)
  return userService.create({ email: user.email, hashedPassword: hashedPassword })
}
Noteworthy: the parameter name plainPassword documents which kind of password is expected. (In C# I would use a named argument at the call site, validatePassword(plainPassword: user.password), which would make the interface expectation, and the label transformation from interface to logic, clear.)
Structurally, it’s not that different from the post’s suggestion. But it avoids truthy value interpretation, and it goes a bit further.
So it really is that simple: a small bash script, building locally, rsync’ing the changes, and restarting the service. It’s just the bare essentials of a deployment. That’s how I deploy in 10 seconds.
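For concreteness, that kind of script is roughly the following sketch (build command, paths, host, and service name are all placeholders here):

#!/bin/sh
set -e
npm run build                                           # build locally
rsync -az --delete dist/ deploy@example.com:/srv/app/   # sync only the changes
ssh deploy@example.com 'sudo systemctl restart app'     # restart the service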
I’m strongly opposed to local builds on any semi-important or semi-complex production product or system.
Tagged CI release builds give you a lot of important guarantees around release concerns.
I’ll take the time cost of a fresh checkout and release build for those consistency and versioned-source-state guarantees.
learned from 10 years/millions of users in production
10 years per millions of users is an interesting metric :P
Maybe all bunnies are actually snails with a fur coat on.
I would like to see TS as the first class citizen however, with JS being deprecated essentially.
What do you mean by that?
From what I read, Deno does primarily use and target TS. They label all that JS stuff as backwards-compatibility and ability for a migration path.
By Fresh you mean Fresh, the deno web framework? (So it’s deno too.)
I’m not in (or into) the JS ecosystem. I’m glad I didn’t have to dive into that at work yet. But I’ve used deno and bun in the past to evade installing NodeJS.
Just now I used deno v2 to build a static website I contributed a fix to, and it worked. I’m very glad to see I don’t have to juggle different npm alternatives or be stuck without when I want to contribute but definitely do not want to install NodeJS.
The deno install was hilariously slow downloading and installing the JS libs into the node_modules folder. 150 MB of JS source code. For a simple static website generator.
Comparing it to the hugo.exe binary (Go, a single-binary static website generator): that one is 80 MB. Not having to juggle many files makes it a lot faster and more compact, of course.
The deno.exe is 107 MB. Which is a chunky size; but man it provides a lot. When you contrast that to the node_modules folder… lol
The announcement also mentions and links to JSR, a TypeScript module publishing platform, also with backwards compatibility and automatic generation of stuff. Which also seems like a good effort.
A strength of the GPL is that the community can fork projects, and “take them over” that way.
At the same time, and this instance is such a case, on a centralized platform, projects can be taken over instead of be forked.
They developed and published a plugin. Now it’s been taken over by someone else, on the primary distribution and discovery platform, and they have no control over it. Worse than that, the takeover now offers their sold functionalities for free.
This makes the “open source but not free, but after two years true FOSS licensed” licenses look very useful if not necessary for businesses and developers that want to monetize. At the very least when they [have to] use centralized platforms.
They have taken over the ACF plugin in the plugin store, in an intransparent manner. It is GPL licensed, but had a pro license and pro features being sold, and the original publisher still sells them on their side.
What a mess.
The URL is still advanced-custom-fields, but it is now named Secure Custom Fields. Translations and the source repo still map to the old name. It definitely is a takeover, not a “fork” in the classic, established sense.
The problem with the takeover is, of course, that the original publisher still develops, publishes, and sells their original plugin. Their official website now serves their own version with their own update source.
So you kinda don’t want to rename it, but also have to, to avoid confusion.
I think a rename to something different is wrong and confusing though. It should add a disclosing addition, like “(Taken Over)” or “Adjusted” or “WPorg edition”.
A supposed, partial rename is confusing. No information in the README is confusing, intransparent, and disingenuous. No clarity in the release notes is confusing.
Simply freeing previously and still sold pro features, without disclosing that fact, is very questionable. Not fair to the developers and certainly not transparent to the community.
Clearing the changelog and release log documentation, removing previously available information, is questionable as well.
I see in the readme.txt file that the plugin is licensed under GPL.
So the changes are permissible. And being able to do so is certainly a strength of the FOSS license.
My biggest issue is that they remove information, and rename without indication. It should be transparent and, within context and concerns, fair. Not like this.
Looking at the commit log:
6 days ago, 6.3.6.1 was tagged with
Security - ACF defined Post Type and Taxonomy metabox callbacks no longer have access to $_POST data. (Thanks to the Automattic Security Team for the disclosure)
14 hours ago, 6.3.6.2 was tagged with the rename:
- Security - Harden fix in 6.3.6.1 to cover $_REQUEST as well.
- Fork - Change name of plugin to Secure Custom Fields.
It also removes the is-pro and pro-license-active checks, but fails to disclose that in the release notes.
Effectively, it frees pro functionalities.
It also removes all previous change log and release information.
By any chance, do you use a niche language that has only two programmers?
I am very proficient in my primary language, C#.
Writing out more context feels like boasting, so I think I will skip that and go directly to a summary/conclusion.
Knowledge and expertise come from more than the language, which you hinted at. The language is only our interface. How is the language represented, how is the code transformed, how is it run? There’s a lot of depth in there - much more than there is in the language itself.
I learned a lot through my own studies, reading, projects, and experience. I’m a strong systematic thinker. It all helps me in interpreting and thinking about context and concerns in breadth and depth. I also think my strengths come at the cost of other things, at least in my particular case.
You’re not alone. Most developers do not have that depth or breadth of knowledge. And most [consequently] struggle with, or are oblivious to, many concerns and opportunities, and struggle to intuitively or quickly understand and follow such information.
Which does not necessarily mean they’re not productive or useful.
The field is incredibly broad. Choose a field or employer or project that’s not doing that and you’re fine.