Because in a few months, people won't remember the details, but they will remember "the time the Wordpress guy abused his influence to damage the Wordpress ecosystem".
Or, alternatively, they could remember "the time the Wordpress guy smacked down a freeloader leeching off the Wordpress ecosystem".
Apart from that - major turbulence in the WP world, and the CMS world in general, could be a positive thing. Maybe it's time for a new player in the space. Wordpress's absolute dominance for basically decades kind of sucks the air out of the space for competitors; there are some, like Ghost and others, but they are barely crawling compared to WP's market share.
Apart from that, even a fork within WP itself wouldn't necessarily be a bad thing - some decisions and the direction of WP itself are questionable from a developer's standpoint: bringing to life an insanely complicated React-based toolkit as the WP editor's building block, archaic conventions in the PHP codebase, a lack of standardized patterns and guidelines for plugin creation, and many more.
Personally I would love to have a PHP-based CMS built on either Symfony or Laravel, with extensive plugin and theming capabilities and a reasonable market share.
Yeah, I recently looked into their "Starshot"[0] initiative to make their CMS more appealing and it's interesting to some degree, but we'll see when it comes out - presumably ~ Jan 15, according to the video
That's also one of my wishes for improvement; currently I just have a long text file where I store the commands, so that if I move servers I can re-run them if needed.
Could you put them in a .sh file and then just run `sh setup_dewey.sh`? Maybe put `&&` between them so that if one fails, it won't keep running through the script?
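Something like this hypothetical sketch (the app name, plugin, and paths are made up; `set -e` gives the same stop-on-first-failure behaviour as chaining everything with `&&`):

```sh
#!/bin/sh
# Replay the dokku setup from a single script.
# Stop at the first failing command, like putting && between each line.
set -e

dokku apps:create dewey                       # app name assumed
dokku postgres:create dewey-db                # assumes the postgres plugin is installed
dokku postgres:link dewey-db dewey
dokku storage:mount dewey /var/lib/dokku/data/storage/dewey:/app/data
dokku config:set dewey SOME_SECRET=changeme   # placeholder value
```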
Yep, in theory I think that should work nicely. So the recovery procedure after a server died would be to restore the dokku data directory from backup and then re-run all the commands. I haven't tested that but I think that should do the job.
Right now I keep the list of commands more as a reference to look up things like how I mounted a volume or my naming scheme for data directories.
Exactly - I was really surprised that dokku isn't based on storing these commands in a config/script which gets executed every time you change something.
These assertions are just not true. I'm working right now on a document with coworkers, concurrently, via a Sharepoint server. Change tracking is enabled and can even be enabled by default by the Sharepoint admin.
I share a lot of your sentiment about open formats in previous comments, but this is classic FUD and demonstrably wrong.
Then our Sharepoint server must have something disabled. I regularly get a message on save that the document has changed in the meantime, with the option to overwrite those changes or discard my own...
No change tracking is visible to me, other than a timestamp and the name of the last editor.
FUD it isn't. Please don't use buzzwords gratuitously; I know _a lot_ more about the deployment here than you. Also, you didn't counter the statement that it doesn't do concurrent editing (like Google Docs or OnlyOffice), which in 2024 is a major gap imho.
Your original comment was speaking in general terms, about all Sharepoint services. Not your specific deployment. Details matter or your argument falls down.
As for concurrency, read my comment again; I used that specific word.
Some thoughts on this from my perspective as a white European who was graciously invited to speak at DjangoCon Africa. I am not involved at all with the PSF, but my company has given the PSF money in the past.
I think I could sense a slight uncertainty about the conference, and the post above makes plain why. Considering what more could have been done had that uncertainty been resolved more quickly, the PSF's procrastination (which is a generous interpretation; a less generous one would be "tactical stonewalling") is very disappointing.
I generally support PSF initiatives, and I hope they will answer publicly and, if necessary, make commitments to improve their communications and processes. The PSF needs to recognise the pent-up demand for support and financial aid outside the Anglosphere. With good leadership it could make a huge contribution.
I appreciate the difficult situation that the organisers are now in: it takes courage to call out poor behaviour.
This doesn’t diminish the organisers or the event in any way: it was a huge success from my perspective as an attendee. DjangoCon Africa was the most invigorating conference experience I have had in years.
What strikes me most about this is that, going by the timeline described in the open letter, the conference organizers planned the event with the expectation they'd get funding and asked for the funding less than five months before the event. That seems like extremely poor planning from my perspective.
Maybe that's normal for events given money by the PSF, I dunno, but "we've already decided to do this, so hurry up and say yes now or it's your fault" doesn't seem entirely reasonable to me. It is, unfortunately, something I've run into time and again with community-driven events.
Yeah, just dug that up myself. I still feel that you ask for funding and then plan the event, if the funding is crucial, but clearly the PSF set an expectation that it didn't live up to.
Note the same FAQ, though, says this: "There is no set maximum, but grants are awarded with consideration for the annual PSF grant budget and that events/sprints will be virtual...with that in mind are willing to consider up to $2,500 per request for larger virtual events."
My guess is that this FAQ needs an update now that the PSF is once again willing to fund in-person events.
It's a bit chicken-and-egg, though: I would expect a grant to be much more likely to be approved for an event that already had a decent amount of groundwork done. For new events especially, you'd want to see some commitment.
I'd want to see a plan, for sure. I would, in the place of the PSF folks, not want to be handed a "this is already in motion, and we are depending on funding so you better say yes."
This doesn't appear to be the case, and they had a plan if the PSF said no. The problem is that the PSF didn't say anything, which in turn meant the organizers couldn't finalize the program or respond, either way, to the travel-expense requests they had received.
In case you've never organized events before, it's a bit of a catch-22. You need to be prepared to accomplish what your funding request entails before you are awarded, to the degree of satisfaction of the grantor.
When the criteria and timelines are unclear, as they were here, the requestor is put in a position where budgeting becomes a greater risk to the event than any planning concern.
The PSF should have responded. There's really no cause for what they did, even if it was only miscommunication.
I’ve organized a lot of events, and I’ve been in the position of holding funding for community events. I’ve never put an event in motion with uncertain funding like this, ever. If I can’t do it without funding from a single entity and they may say “no” then I don’t set things in motion until they’ve said yes.
The letter says it was approved "nearly three months" after, and that's including the request for more information (idk if that should count as "processing" delay or not).
Definitely over 6 weeks but not as much as I first assumed.
I don’t see why it would take more than one PSF board meeting to approve funding for an event. Even if the board had questions, the follow-up is supposed to happen quickly. Several months to approve $9000 is silly. That’s probably just the catering budget for a 3-day, 200-person conference in the Bay Area - let alone for a conference for all of Africa’s Python community.
The FAQ[1] asks for 6 weeks before an event, so I suppose that's fair. That still seems like poor planning, though -- clearly the event was planned on the presumption that the funding would be approved. What would this open letter have looked like if the ask had been turned down altogether?
IMO the order of operations should be 1) ask for funding, 2) plan the event, not the other way around.
6 months is way in advance. PSF policy is to ask 4-6 weeks before the event (Timeframe: We require that applications be submitted 6 weeks before the event/project start date - this gives us enough time to thoroughly review, ask questions, and have enough time to send you the funds.), so 6 months is more than enough time.
And we're missing the issue: the organizers were asking for $9k, which by all means should have been an easy yes/no. If they had asked for $100k, maybe, but $9k is chump change.
The funding FAQ gives general guidance of $2500 at the high end and still discusses virtual events. I suspect part of the problem is that the process is geared toward very low-budget meetups and virtual things, not six-day real-world events.
I'm missing something: you're saying 5 months is not enough time for a bunch of people to meet up and say "yeah, we're giving money for this event" or "no, we're not interested"? It should take a week or two to decide, not months.
I'm very disappointed in the PSF leadership. The organization has a clear mission, and its leaders need to be held accountable. Folks leading institutions ought to understand that they are placed there as stewards, not given an opportunity to exercise leverage in other areas, regardless of their good intentions. People who donate to the PSF do not want to be an instrument for another cause; they could just as well donate to that cause directly.
It's a shame that for a GUI toolkit, there are no screenshots, either on Github or the official project website. It's very difficult to evaluate this based on descriptions alone.
Does anyone have any examples of prompting to feed such a large amount of tokens? For example, would you use something like "I am going to send you an entire codebase, with the filename and path, followed by the file content. Here is the first of 239 files: …"
It works really well; you can tell it to implement new features or mutate parts of the code, and having the entire codebase (or a lot of it) in its context really improves the output.
The biggest caveat: shit is expensive! A full 32k-token request will run you like $2; if you do dialog back and forth, you can rack up quite the bill quickly.
If it were 10x cheaper, I would use nothing else; having a large context window is that much of a game changer. As it stands, I _very_ carefully construct the prompt and move the conversation out of the 32k into the 8k model as fast as I can to save cost.
How does it calculate the price? I thought that once you load the content (the 32k-token request / $2), it would remember the context so you could ask questions much more cheaply.
It does not have memory outside the context window. If you want to have a back-and-forth with it about a document, that document must be provided in the context (along with your other relevant chat history) with every request.
This is why it's so easy to burn up lots of tokens very fast.
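On the pricing question above: at GPT-4-32k's published rates (roughly $0.06 per 1k prompt tokens), a full 32k-token request is about $1.92, which matches the ~$2 figure. And because the API is stateless, each follow-up resends (and re-bills) everything. A minimal sketch with curl against the OpenAI chat completions endpoint (model name and placeholder contents are illustrative):

```sh
# Turn 1: the whole document rides along (~32k tokens billed once).
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4-32k",
    "messages": [
      {"role": "user", "content": "<full document here> ... question 1"}
    ]
  }'

# Turn 2: no server-side memory, so the document AND the first exchange
# must be resent -- and are billed again in full.
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4-32k",
    "messages": [
      {"role": "user", "content": "<full document here> ... question 1"},
      {"role": "assistant", "content": "<answer 1>"},
      {"role": "user", "content": "question 2"}
    ]
  }'
```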
I already do this with the current context limits. I include a few of my relevant source files before my prompt in ChatGPT. It works unreasonably well.
Something like the following (a small script for assembling this kind of prompt is sketched after the example):
Here is the Template class:
…
Here is an example component:
…
Here is an example Input element:
…
I need to create another input element that allows me to select a number from a drop down between 1/32 and 24/32 in 1/32 increments
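And here is a trivial way to assemble a prompt like that from files, as a hypothetical sketch (the file paths are made up):

```sh
#!/bin/sh
# Stitch a few source files into one prompt, each prefixed with its path,
# then append the actual request. Paths below are placeholders.
{
  for f in src/Template.php src/components/Card.php src/inputs/TextInput.php; do
    printf 'Here is %s:\n\n' "$f"
    cat "$f"
    printf '\n\n'
  done
  echo "I need to create another input element that allows me to select a number from a drop down between 1/32 and 24/32 in 1/32 increments."
} > prompt.txt
```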
You could see the legal impact of your actions before you take them. You could template out an operating system and have it fill in the blanks. You could rewrite entire literary works, in the author's style, to cater to your reading style or story preferences.