#S3
🚀New blog post! Learn how to efficiently move your S3 files with the AWS CLI. #AWS #S3 📁🔄
https://lpembleton.rbind.io/posts/aws-cli-s3/
I have a very brief favour to ask: could anyone check how the #AWS #S3 #API responds when one tries to delete an object that simply doesn't exist, in an object-versioning-*disabled* bukkit?
Do you get an HTTP 204 ... or something else? #lazyweb
(I do know how lazy this sounds - but I'm in the middle of a hike (without a decent computer!) and have had a fab idea that hinges on knowing about the API response!)
#S3, #S4: disruption ended (as of 30.09., 08:57)
30.09.2023, 08:25 - 10:00
Delays and partial cancellations may still occur due to an earlier police operation in Langen.
http://www.rmv.de/auskunft/bin/jp/query.exe/dn?selectedhimid=2143309%26view=trafficmap%26trafficmap_date=30.09.23#map
#S3, #S4: ongoing disruption (as of 30.09., 08:28)
30.09.2023, 08:25 - duration unknown
There are delays and partial cancellations due to a police operation in Langen.
http://www.rmv.de/auskunft/bin/jp/query.exe/dn?selectedhimid=2143309%26view=trafficmap%26trafficmap_date=30.09.23#map
#S1, #S2, #S3, #S4, #S5, #S6, #S7, #S8, #S9: ongoing disruption (as of 30.09., 03:35)
30.09.2023 02:54 - duration unknown
Due to short-notice staff shortages, there are isolated train cancellations across the entire S-Bahn network.
http://www.rmv.de/auskunft/bin/jp/query.exe/dn?selectedhimid=2143306%26view=trafficmap%26trafficmap_date=30.09.23#map
Version 4.4.0 of syslog-ng is now available in #FreeBSD ports:
https://www.freshports.org/sysutils/syslog-ng/
Among other things, it adds support for #S3 as a destination. You have to enable #Python modules for this.
On my way to #manchester to join AWScomsum! 🛫
I look forward to meeting amazing people there, learning from the speakers, and sharing some learnings about #S3 pre-signed URLs!
If you are there too come over for a chat 🤝

#S3, #S4, #S5, #S6: restrictions during the nights of 26.09. through 30.09.
26.09.2023, 20:00 - 30.09., 06:00
There are restrictions due to an understaffed signal box at "Frankfurt Südbahnhof".
http://www.rmv.de/auskunft/bin/jp/query.exe/dn?selectedhimid=2143237%26view=trafficmap%26trafficmap_date=26.09.23#map
Version 4.4.0 of syslog-ng is now available. New features include:
- #Grafana #Loki destination
- #S3 destination (implemented in #Python)
- compression in the http() destination
- @OpenSearchProject destination
For more details read the release notes at: https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-4.4.0
#S1, #S2, #S3, #S4, #S5, #S6, #S8, #S9: ongoing disruption (as of 26.09., 06:56)
26.09.2023 05:56 - duration unknown
There are delays and cancellations on all S-Bahn lines except the S7. Lines S5 and S6 are running every 30 minutes. The cause is a signal failure in the Frankfurt S-Bahn tunnel.
http://www.rmv.de/auskunft/bin/jp/query.exe/dn?selectedhimid=2143219%26view=trafficmap%26trafficmap_date=26.09.23#map
Hello MastoAdmins, I need help! Buddyverse.xyz is currently serving files from link.storjshare.io. However, I want to use my own domain, "cdn.buddyverse.xyz," now. Is there any potential issue if I change S3_ALIAS_HOST at this point? The old files should be accessible from their previous URL. Do I need to perform any migrations or similar tasks? Thank you! 😊
Edit: I simply changed S3_ALIAS_HOST and everything is working fine so far
#MastoAdmin #Fedimin #FediAdmin #Mastodon #Fediverse #Askfedi #Help #Sysadmin #S3
Is anyone familiar with IONOS ObjectStorage? What sort of traffic costs should you expect for an average Mastodon instance? #mastoAdmin #admins #s3 :boost_ok:
I tried firing up a #Windows instance in #AWS #EC2. Super-easy, fast, and it doesn't cost anything when not in use (and $0.12/hr when it is). You can snapshot the #EBS volume to #S3 and delete/restore it later to save even more. If you're like me and only need access to Windows periodically, it's a great way to go.
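The arithmetic behind that pattern is easy to sketch. The $0.12/hr figure comes from the post above; the snapshot storage rate below is an illustrative placeholder, not a quoted price:

```python
# Rough cost sketch for intermittent Windows-on-EC2 use.
HOURLY_RATE = 0.12          # instance cost while running (from the post)
SNAPSHOT_GB_MONTH = 0.05    # assumed snapshot storage price, illustrative only

def monthly_cost(hours_used: float, snapshot_gb: float = 0.0) -> float:
    """Instance-hours plus snapshot storage for one month."""
    return hours_used * HOURLY_RATE + snapshot_gb * SNAPSHOT_GB_MONTH

always_on = monthly_cost(730)        # left running 24/7 (~730 hrs/month)
occasional = monthly_cost(10, 30)    # 10 hrs/month plus a 30 GB snapshot

print(f"always on:  ${always_on:.2f}/month")
print(f"occasional: ${occasional:.2f}/month")
```

Even with generous snapshot assumptions, the occasional-use pattern comes out at a small fraction of leaving the instance running.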
#S3, #S4, #S5: disruption ended (as of 22.09., 09:38)
22.09.2023 10:30
Knock-on delays may still occur due to an earlier police operation between Frankfurt-Westbahnhof and Rödelheim.
http://www.rmv.de/auskunft/bin/jp/query.exe/dn?selectedhimid=2143134%26view=trafficmap%26trafficmap_date=22.09.23#map
Hey #mastoadmin s, I need your help.
After upgrading my server to 4.2.0, when I try to set
- S3_STORAGE_CLASS=ONEZONE_IA
I constantly get
Aws::S3::Errors::InvalidStorageClass
errors.
The same happens with STANDARD as the value.
Using Scaleway as a backend, and I know they have this feature on my region (FR-PAR). https://www.scaleway.com/en/docs/storage/object/api-cli/object-operations/#putobject
Have you experienced this after the update? If so, can you help me please?
Thanks!
Boosts welcome :neocat_sign_thx:
A quick heads-up:
The S3Drive app is now also available on Flathub :neocat_sign_yip:
https://flathub.org/apps/io.kapsa.drive
An AppImage and a DEB package are also available via GitHub.
https://github.com/s3drive/app
It has been in the Microsoft Store for a while now; an exe can also be found on GitHub, as can the Mac version.
General information is on the homepage.
#S1, #S2, #S3, #S4, #S5, #S6, #S8, #S9: ongoing disruption (as of 19.09., 19:26)
19.09.2023, 19:25 - duration unknown
There are delays and cancellations. The causes are unauthorised people on the tracks as well as two rescue operations, in Rödelheim and in Bad Soden.
http://www.rmv.de/auskunft/bin/jp/query.exe/dn?selectedhimid=2143055%26view=trafficmap%26trafficmap_date=19.09.23#map
#S3: ongoing disruption (as of 19.09., 09:00)
19.09.2023, 08:45 - duration unknown
There are delays due to a fault on the line near Darmstadt Hauptbahnhof.
http://www.rmv.de/auskunft/bin/jp/query.exe/dn?selectedhimid=2143043%26view=trafficmap%26trafficmap_date=19.09.23#map
I fixed a problem on the #FédiQuébec Mastodon instance over the past few hours.
Media files attached to posts from about a month ago, spanning a period of roughly a week, from certain remote accounts were missing. That is now fixed.
26 GB more in the #S3 bucket.
#S3, #S4, #S5, #S6: disruption ended (as of 17.09., 09:21)
17.09.2023 10:00
Knock-on delays may still occur due to an earlier disruption caused by an object on the tracks near Frankfurt-Lokalbahnhof.
http://www.rmv.de/auskunft/bin/jp/query.exe/dn?selectedhimid=2143001%26view=trafficmap%26trafficmap_date=17.09.23#map
#S3, #S4, #S5, #S6: ongoing disruption (as of 17.09., 09:16)
17.09.2023 - duration unknown
There are delays, and possibly partial cancellations, due to an object on the tracks near Frankfurt-Lokalbahnhof.
http://www.rmv.de/auskunft/bin/jp/query.exe/dn?selectedhimid=2143001%26view=trafficmap%26trafficmap_date=17.09.23#map
For #FédiQuébec I currently have three "starter" VPSes: one for the mail server and the website, one for the authentication server, and one for Mastodon.
While the first two are up to their tasks, Mastodon is short on disk space despite using #S3. It is even becoming hard to run updates for lack of space to build the new docker images.
It also lacks the RAM to run ElasticSearch.
In short, I'm considering renting a slightly bigger fourth VPS...
#S3, #S4, #S5, #S6: disruption ended (as of 16.09., 19:52)
16.09.2023 - duration unknown
There are knock-on delays due to an earlier police operation at "Frankfurt Lokalbahnhof".
http://www.rmv.de/auskunft/bin/jp/query.exe/dn?selectedhimid=2142996%26view=trafficmap%26trafficmap_date=16.09.23#map
#S3, #S4, #S5, #S6: ongoing disruption (as of 16.09., 19:20)
16.09.2023 - duration unknown
There are delays and possibly short-notice partial cancellations due to a police operation at "Frankfurt Lokalbahnhof".
http://www.rmv.de/auskunft/bin/jp/query.exe/dn?selectedhimid=2142996%26view=trafficmap%26trafficmap_date=16.09.23#map
#S1, #S2, #S3, #S4, #S5, #S6, #S8, #S9: disruption ended (as of 16.09., 16:26)
16.09.2023 - duration unknown
There are knock-on delays and possibly short-notice partial cancellations due to people on the tracks at "Frankfurt Konstablerwache".
http://www.rmv.de/auskunft/bin/jp/query.exe/dn?selectedhimid=2142988%26view=trafficmap%26trafficmap_date=16.09.23#map
Has anyone tried out #fybe? It's a brand owned by #contabo with relatively good #s3 / #objectstorage pricing.
I want to give it a shot because #b2 is quite costly on API calls, but I have been stuck for multiple hours in “provisioning”
Controversial topic time at AWS Bites! 😲 S3 is not a filesystem, but AWS made a new way for you to pretend that it is. Good idea 💡 or footgun-enabler 🙈?! Let's find out! https://www.youtube.com/watch?v=ArM0XaAwkrY #AWS #S3 #Buckets #filesharing #cloudmigration
Have some #groovy running in an #ECS container on #Fargate. The #AWS #S3 client builder should be using the default provider chain but does not appear to be picking up credentials correctly. A subsequent object upload fails with a 403 Forbidden. The ECS task and execution roles otherwise have the correct #IAM permissions, and I'm able to upload a file directly from within the container.
There's some standard #AWS #security guidance out there that is (in my personal opinion) incredibly stupid. It is usually reported as: “Ensure MFA Delete is enabled on S3 buckets”. And you can find references to it in AWS CIS Benchmarks 1.4 level 1, under the heading “2.1.3 Ensure MFA Delete is enabled on S3 buckets”.
Raise your hand if you do this. Put your hand down. You do not do this. No one should do this ever. Let me summarise the documentation on this feature. Here are the steps:
- Login as root. (Error! We NEVER EVER DO THAT)
- Provision Access Key / Secret Key to root and copy it to your favourite command line, like a terminal on your laptop or an EC2 instance. (Holy shit! We NEVER EVER DO THAT)
- Go get the root user’s physical or virtual second factor (You're just fucking with me now.)
- Issue the `s3api put-bucket-versioning` API call from your root AWS CLI with root's access keys and root's MFA token (💀)
- Undo all those things: revoke the access keys, deprovision the virtual MFA token from the admin's phone, whatever.
- If you have any reasonable AWS security in place monitoring root account usage, you now go close out a dozen false alarm alerts going off because the root account was used for something. Enjoy the rest of your day closing tickets.
And you have to do this for every bucket!? Every time someone creates a new bucket? This advice is like 7 years old. It should never have been mindlessly recommended for all buckets.
The goal of this was to make sure that adversaries couldn't delete records of malicious activity or delete data they shouldn't. Since CloudTrail was often stored in S3, you needed to protect your S3 objects from unauthorised deletion. The capability that you really want is called S3 Object Lock. It doesn't require root access keys, root's MFA, or the CLI. It stops objects from being deleted, which is what this is all about. It has a few modes of operation, with lifecycle policies, and reasonable behaviours.
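For contrast with the MFA Delete dance above, here is a sketch of what the Object Lock alternative looks like. The bucket must be created with Object Lock enabled; the mode and retention period below are illustrative, not a recommendation. This JSON is the shape of document that `aws s3api put-object-lock-configuration --object-lock-configuration file://lock.json` consumes, with no root credentials involved:

```json
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": {
      "Mode": "COMPLIANCE",
      "Days": 365
    }
  }
}
```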
I can’t believe that the CIS Benchmarks still call for MFA delete. To me it goes to show that they don't actually know how the platform works. The AWS conformance pack for S3 no longer has MFA delete in it. They still mindlessly recommend S3 versioning (which I think is only useful in some rare use cases, but whatever), but they don’t recommend MFA delete any longer.
I hope people aren’t out there doing #S3 #MFA delete in 2023. It is stupid.
Let's say I inspired you and you decided to turn it off where you might have turned it on. Good luck with that. (a) you can't see it in the AWS console (you can only get the status from the AWS CLI), and (b) you gotta do the same dance above (root's keys and MFA) to turn it off.
File storage in #AWS is simple: just use Simple Storage Service (#S3)
...unless you are using Elastic Cloud Compute (#EC2); in which case you can use instance volumes
...unless you want the files to persist when EC2 restarts; in which case you must use Elastic Block Store (#EBS)
...unless your EC2 instances are in different availability zones; in which case you need Elastic File System (#EFS)
SEE!? SIMPLE! 😕
Adding S3-compatible cloud storage to Pixelfed
Pixelfed is a federated photo sharing service that is an alternative to Instagram and uses ActivityPub to share posts across other services such as Mastodon, Pleroma and indeed WordPress. It is one of the more mature Fediverse apps and has been in continuous development for s
https://simongreenwood.me.uk/adding-s3-compatible-cloud-storage-to-pixelfed/
#CloudStorage #Fediverse #Pixelfed #Technical #Uncategorized #fediverse #pixelfed #s3
Greetings, Fediverse. Introducing BonitoCache, optimized for S3 bucket reads. Crafted in Go, it allows configurable RAM and disk limits. Preliminary data shows a 95% bandwidth reduction to the S3 endpoint.
Optional InfluxDB integration is available for performance monitoring, providing in-depth metrics.
Your feedback is invaluable. For further information, visit our GitHub repository: https://github.com/i5heu/bonito-cache
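For the curious, the core idea of a RAM-bounded read cache can be sketched in a few lines. This is not BonitoCache's actual implementation (that project is written in Go); it is just a Python illustration of a size-limited LRU byte cache of the kind such a proxy sits on:

```python
from collections import OrderedDict
from typing import Optional

class ByteLRUCache:
    """Size-bounded LRU cache for object bytes (illustrative sketch)."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.used = 0
        self._store = OrderedDict()  # key -> bytes, oldest first

    def get(self, key: str) -> Optional[bytes]:
        if key not in self._store:
            return None               # miss: caller fetches from S3, then put()s
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key: str, value: bytes) -> None:
        if key in self._store:
            self.used -= len(self._store.pop(key))
        self._store[key] = value
        self.used += len(value)
        while self.used > self.max_bytes:          # evict least recently used
            _, evicted = self._store.popitem(last=False)
            self.used -= len(evicted)

cache = ByteLRUCache(max_bytes=10)
cache.put("a", b"12345")
cache.put("b", b"12345")
cache.get("a")            # touch "a" so "b" becomes the eviction candidate
cache.put("c", b"1234")   # over budget: evicts "b"
```

Every hit served from this structure is a request (and its bandwidth) that never reaches the S3 endpoint, which is where the reported savings come from.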
Done. At first glance, it's much faster.
#ObjectStorage #S3 #OVH
Managed to migrate from one @portainerio host to the other using the #S3 backup feature via #Synology #ObjectStorage.
The instance had 4 more agent-connected hosts, that were all registered with 0 issue and all compose deployments are working!
https://www.blackvoid.club/backup-portainer-with-synology-c2-object-storage/

🤯 I'm struggling like crazy to upload a file from my #nodejs / #typescript program to my #minio / #s3 bucket
It's driving me nuts...
![Result of ./s5cmd help
NAME:
s5cmd - Blazing fast S3 and local filesystem execution tool
USAGE:
s5cmd [global options] command [command options] [arguments...]
COMMANDS:
ls list buckets and objects
cp copy objects
rm remove objects
mv move/rename objects
mb make bucket
rb remove bucket
select run SQL queries on objects
du show object size usage
cat print remote object content
pipe stream to remote from stdin
run run commands in batch
sync sync objects
version print version
bucket-version configure bucket versioning
presign print remote object presign url
help, h Shows a list of commands or help for one command](https://s3.eu-central-2.wasabisys.com/mastodonworld/cache/media_attachments/files/110/977/631/878/299/307/small/0791608ab2b50167.png)
On #Firefish things are looking good with the #S3 #Objektspeicher (object storage). Now I'm syncing the files that have accumulated on #Mastodon so far, with the instance stopped.
Since I limited the cache to one day earlier today, it's only 23 GB 😉
Once that's done, I'll restart Mastodon with the changed configuration file and hope everything went smoothly 😀
Look out #ILGISA, I did it again. I submitted 4 things to present. And ILGISA just keeps saying yes! One of us will learn at some point.
For you IL #GIS folks, if you are interested in
- Using #Survey123 for restaurant inspections
- Using the #OSMUS Tasking Manager to oversee #OpenStreetMap editing campaigns
- Creating local extracts of OSM data with #osm2pgsql
- Using Amazon #S3 for a "serverless" basemap tile option
Then you should come to ILGISA this year!
☁️ A Look at Using Elixir Streams, Elasticsearch & AWS S3
➥ Guy Argo | HackerNoon
https://hackernoon.com/a-look-at-using-elixir-streams-elasticsearch-and-aws-s3
Advice please?
I'm trying to get my head around #S3 storage costs. I think I pay for the amount of data in storage per month, AND data transfer costs. As an example, Amazon allow free data transfer IN but charge per GB out.
How do you estimate what the transfer costs could be?!
Any tips/thoughts/good providers? I just used Amazon as an example.
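One way to get a rough feel for it is to price the two line items (storage and egress) separately. The rates below are illustrative placeholders, not any provider's actual price list; plug in real rates from your provider:

```python
# Back-of-envelope S3 cost estimator. Rates are illustrative
# placeholders only -- substitute your provider's actual prices.
STORAGE_PER_GB_MONTH = 0.023   # assumed storage rate, $/GB-month
EGRESS_PER_GB = 0.09           # assumed data-transfer-out rate, $/GB

def monthly_s3_cost(stored_gb, downloads_per_month, avg_download_gb):
    """Storage charge plus egress charge for one month."""
    storage = stored_gb * STORAGE_PER_GB_MONTH
    egress = downloads_per_month * avg_download_gb * EGRESS_PER_GB
    return storage + egress

# e.g. 100 GB stored, fully downloaded twice in the month
print(round(monthly_s3_cost(100, 2, 100), 2))
```

The usual takeaway: as soon as data is read out more than once or twice a month, egress dominates storage, which is why estimating "how often will this be downloaded?" matters more than the stored size.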
[ Mountpoint for Amazon S3 Now GA to Access Bucket Like Local File System ]
https://www.infoq.com/news/2023/08/mountpoint-amazon-s3-ga/ #AWS #S3 #MountPoint
"#AWS releases 'Mountpoint for Amazon S3', which lets you mount #Amazon #S3 as a file system from #Linux": Publickey
"You can treat an Amazon S3 bucket as if it were a local file system."
https://www.publickey1.jp/blog/23/awslinuxamazon_s3mountpoint_for_amazon_s3.html
🗞 New episode of Changelog News!
🤔 Armon Dadgar announces #HashiCorp's #BSL future
🏆 Matt Rickard on why #TailwindCSS won
🕴️ WarpStream is like #Kafka directly on top of #S3
🧩 Vadim Kravcenko’s guide to managing difficult devs
📢 Russ Cox gives an update on #golang 2
🎙 hosted by @jerod
#aws #s3 uses #rust to define its critical components https://www.amazon.science/publications/using-lightweight-formal-methods-to-validate-a-key-value-storage-node-in-amazon-s3
We are working on adding support for authentication with #S3 backends using temporary credentials from #STS obtained from an ID token from an #OpenID connect provider. Let us know if you know about a common deployment scenario (on #AWS or other provider). We are looking to test the interoperability of our implementation and provide sample connection profiles in https://github.com/iterate-ch/profiles/issues/55
There was a loud 'WHAT THE ACTUAL FUCK!?" to be heard in this house after Carrie Paige screamed her terrifying scream and it was lights out in the Palmer house.
I feel so sorry for first timers and what they have to go through watching it. Fuck you, Lynch 🥺.
How about that ominous whooshing guys and girls. #twinpeaks #s3
detailed and genuinely useful blog on how we built our internal data warehouse on #ClickHouse at ClickHouse
https://clickhouse.com/blog/building-a-data-warehouse-with-clickhouse
#ClickHouse #DataWarehouse
starring
#Superset #AppFlow #Airflow #Redis #Docker #opensource #RDS #PostgreSQL #DBT
and including data from
#Salesforce #AWS #S3 #BigQuery #M3ter #Segment #Marketo
I'm looking for a #MastoAdmin (or several #MastoAdmins) who use an #ObjectStorage #S3 service and would be willing to give me a little hand setting it up.
In 2018 I followed this guide (https://stanislas.blog/2018/05/moving-mastodon-media-files-to-wasabi-object-storage/) to use #Wasabi without a #ReverseProxy, but since that is no longer possible (see: https://pouet.fedi.quebec/@manu/110658367412016709), I'm trying with #OVH (S3 Standard) and a reverse proxy inspired by this: https://docs.joinmastodon.org/admin/optional/object-storage-proxy/
Thanks for sharing!
"Hijacking S3 Buckets: New Attack Technique Exploited in the Wild by Supply Chain Attackers"
"Without altering a single line of code, attackers poisoned the NPM package "bignum" by hijacking the S3 bucket serving binaries necessary for its function and replacing them with malicious ones"
@Kaea
If you want to keep costs down, you invest in your own infrastructure and offer services on it, because playing around with corporate cloud always ends up expensive :)
FYI: the @ftdl foundation has its own mini server room in Kraków, powered largely by photovoltaics, as well as its own admin, moderation, support, and R&D teams.
It runs #Mastodon, #Matrix, #Nextcloud, #Kbin, and #S3 instances, and supports projects with infrastructure, such as Noevil.pl, the Kooperatywa Parasol and Open Food Network, and SVMetaSearch. The admins also help out with the largest /kbin instance, kbin.social.
All of this without $ subscriptions, ads, or state subsidies.
It's a different scale, but where there's a will, there's a way :)
@manganapp
Cool! @palmin just published a #S3 #ObjectStorage app for #iOS and #MacOS.
After his amazing #git client https://workingcopy.app/ and his #SSH terminal/files-by-SSH app https://secureshellfish.app/ I am sure we can expect a great app. It's called "S3 Files" and you can read more at https://s3files.app/
If you've been following my recent #MinIO troubles: there's been a new development!
They have...
...decided to drop any hint of cooperation and lock down the issue.
And you know what: I'm feeling good about this. I can't drop MinIO for work, but personally, I won't touch the project with a ten-foot pole.
But here's a final reminder: if you're using encryption, make sure that you're running the latest version of MinIO's KES server. Otherwise your data could be corrupted.
From a #matrix room, #AWS #S3 egress costs are like a #ransomware 😂
Heads-up, #NixOS Foundation seems to need help and community input on a developing NixOS cache S3 situation: https://discourse.nixos.org/t/the-nixos-foundations-call-to-action-s3-costs-require-community-support/28672
Please don't hesitate to bring new points and interesting things which could help steering the situation!
I already gave my (somewhat personal) view in the second post.
#aws just emailed to say some client is accessing stuff in #s3 using old tls, and if i want more information i should set up something called #cloudTrail. i'm twenty minutes into the docs and console and i've decided i no longer care about the client using old tls.
HIRING: Senior Principal Engineer, Encryption / Sydney, Australia https://infosec-jobs.com/J30998/ #InfoSec #InfoSecJobs #Cybersecurity #jobsearch #hiringnow #CyberCareers #Sydney #Australia #APIs #AWS #Azure #Cloud #Encryption #GCP #Java #Lambda #Microservices #S3 #Strategy
I wrote a blog post about my #Mastodon #S3 migration from #Scaleway to my own #Minio media storage:
Switching Mastodon from Scaleway S3 to self-hosted Minio S3 media storage
https://thomas-leister.de/en/switching-mastodon-from-scaleway-to-selfhosted-minio-s3/
Maybe it is an interesting read for some of you. 🙂
@mattblaze
Wasn't there a person who was the whole #AWS #S3 on #Bluesky? I'm not sure their verification method is more secure at all.
T-Shirts for #ICSE2023! I am hiring multiple Post-Docs in software testing and analysis who want to combine their independent research agenda with our #S3 techniques. Ask wearers (+ me) at #ICSE for details, and check out the S3 ERC project at https://cispa.de/s3
@marcthiele @normanposselt Same here … prepared for selling #VanMoof #S3
I think I'll be able to close the Scaleway S3 bucket soon. Saves us another 9 € / month.
@charlotte @erk @encthenet EXACTLY!
#Amazon has a vested interest in acting a bit more long-term.
Unlike #Microsoft's #EEE (https://en.wikipedia.org/wiki/Embrace%2C_extend%2C_and_extinguish#Examples_by_Microsoft), they want #S3 to become the de-facto standard, as they already dominate #CloudComputing and making shit easier on their platform will only work if it isn't exclusive.
Even if that means Microsoft ( #Azure ), #OVH, #Hetzner and even #Proxmox can do the same...
It also fixes a lot of issues #iSCSI has...
Also, requiring people/companies to pay doubly ensures that not everyone's voice is heard. It's already time consuming enough, but having to pay thousands of dollars for the honor of spending your free time to try to fight against corporations isn't something most OSS people want to do.
There's a reason most RFCs are corporate sponsored.
Except that there are other forums that produce standards that are open/free. I've participated in an OASIS standard. In checking to see if ecma made all their standards free, I came across this:
https://www.nist.gov/standardsgov/standards-organizations-offer-free-access-their-standards
Quite a few orgs making their standards free (though a number of the ones on that list have limitations).
Verdict on garage after 6 months of use:
* overall, it works remarkably well;
* in single-instance mode, you really need ultra-reliable storage underneath if you don't want nasty surprises;
* in multi-instance mode, it's extremely resilient even if the underlying storage/network isn't particularly reliable.
it would appear #aws #s3 bucket changes are rolling out in us-east-2 right now
https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/
#AWSCognito and #AWSS3 allow for subdomain hijacking / squatting. Please double-check spelling when communicating with #cognito and #s3 subdomains. #cybersecurity
@cldellow So, one issue is, the way #AWS exports the snapshot to #S3, it splits up large DBs into multiple gzipped parquet files (See screenshot). I'm not sure if there's a joining process that needs to happen first, or if the tool is smart enough to just do it.
I can almost certainly send you a file to show you what’s what; dm me an email?
Thank god I have over a decade of #Python experience, so I have the skills necessary to get a basic working env.
Anyway, pipenv was the hammer this time, so now I have datasette installed, hooray!
Next: getting all the parquet files out of #S3 and onto my local machine.
Here, another hiccup! `aws s3 cp s3://{bucket}/{key}` returned “fatal error: An error occurred (404) when calling the HeadObject operation”
I must be doing _something_ wrong, because @panic's Transmit got 'em just fine.
3/n
There have probably been a couple of these polls around already, but I would still like to know:
What object storage are you using for your #Mastodon #Instance (if any)? What are the advantages and disadvantages?
I am currently using #AWS #S3 which works great, but is probably not the cheapest option. Boosts appreciated. :boost_ok:
On its face it's just a PUT, but there is some auth and signature stuff to be dealt with. Not a ridiculous amount of stuff, just some parameter normalization, SHA1-HMAC, etc, but nice if someone already got the details right.
The context is an integration scenario where I have an API I want to receive data on, but the source only implements #S3 and doesn't offer any generic http methods.
It's conceivable that #boto could be "run backwards", I'm gonna look there.
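For a sense of scale, the legacy SigV2 query-string signing that boto implements is only a few lines of stdlib code. This is a sketch, not production code: new integrations should use SigV4, and the host/resource layout here is an assumption for illustration. It does show the HMAC-SHA1 shape the post alludes to:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def presign_get_v2(bucket, key, access_key, secret_key, expires_in=3600):
    """Sketch of legacy S3 SigV2 query-string auth for a GET request."""
    expires = int(time.time()) + expires_in
    # Canonicalized string to sign: method, MD5, type, expiry, resource
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(),
                 hashlib.sha1).digest()
    ).decode()
    return (f"https://{bucket}.s3.amazonaws.com/{key}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}"
            f"&Signature={quote(sig, safe='')}")
```

A PUT works the same way with the method, Content-MD5, and Content-Type slots filled in; the "parameter normalization" is mostly getting the canonicalized resource string exactly right.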
So here it is: the exhaustive, not-at-all-scripted (because lazy) method for migrating data from local storage to S3 storage for Nextcloud.
https://blog.libertus.eu/migrer-nextcloud-dun-stockage-local-a-s3/
In fact, compared with the version you most often find online, what is mostly missing elsewhere is what happens in the `oc_mounts` table.
I'd like to test whether it's possible to migrate account by account (at first glance it's impossible, but you never know...).
@ghaering @jeffbarr
I could read the 2,700 words for 28 manual steps, making sure I select the right region for each of my >120 buckets in >40 AWS accounts with >5 million objects--and I'm just one individual customer.
Or AWS could automate it for all customers, making sure that the experience is smooth and transparent. It wouldn't even matter if it took a year to run.
If encrypting all future S3 objects is appropriate, then doing the same for existing objects is a clear choice.
@jeffbarr
Next up: Encrypting all existing S3 objects transparently in the background?
@danie10@mastodon.social
I'm not sure what you're asking. You mean a description of the product?
It's a distributed, S3-compatible object store, self-hosted for a redundant solution.
For example, for a #Fediverse information store (say, a #PeerTube live and #VoD #streaming server), you can use #Wasabi and purchase affordable storage space to stream your videos from their S3-compatible buckets, or you can purchase inexpensive storage VPSes and deploy your own distributed #S3 storage solution using #Garage for much less than paying for additional HDD storage space on your own compute instances.
All of your videos can be kept and streamed from your S3 storage space much more affordably than paying for larger HDDs on your #VPS or it costing you an arm and a leg with Amazon (ewww, yucky).
Garage provides the distributed high availability and is more affordable than even discount commercial #S3 storage providers like Wasabi.
https://garagehq.deuxfleurs.fr/
There's additional use cases on the project's website, which is more descriptive than their download page for the latest release at their git repo.
Purchasing additional really cheap VPSes with fewer computing resources, optimized as storage servers, as opposed to adding HDD space or extra volumes to your existing VPS, is much cheaper, and Garage provides that distributed high availability. You can create buckets, and your other VPSes can also take advantage of that additional space, instead of, for example, creating NFS volumes on one (by comparison, expensive) server for your other VPSes to mount.
You can further distribute that storage space globally with #Garage, which is self-hosted S3 compatible and #FOSS.
So you have 4 VPSes with lots of RAM and CPUs in each one, but they have small HDDs to keep costs down. You can easily outgrow your storage needs with fresh #Friendica (https://Friendi.ca), #Calckey (https://codeberg.org/calckey/calckey), #Forĝejo (https://Forgejo.org), and #NNCP (https://www.complete.org/nncp/) servers, and upscaling those virtual machines with additional HDD space or attaching additional storage volumes from your provider is expensive, and geographically homed.
In that scenario, you could purchase a single storage VPS with a huge volume, deploy Garage with a few S3 buckets to accommodate your Fediverse, Forĝejo, and NNCP servers, and meet all of your storage needs very affordably.
If those VPSes are scattered about the globe in different data centers or providers, you can deploy additional Garage instances, distributing those existing S3 resources globally, and increasing availability while reducing latency.
I suppose you could also embrace #Cloudflare to address that last issue (I'm kidding, of course. Cloudflare == Satan).
I know you're primarily playing with a lot of #smallweb, #IoT, and Home #LAN based projects - a single Garage instance in the cloud could deliver your content at scale and speed.
#tallship
⛵
.
Cloudflare reports 2GB more usage than Mastodon... Has anybody built a tool to understand the discrepancy? would love to avoid writing it myself.
OTOH a lower hanging fruit is applying the patch to delete avatars and headers from the cache.
Hello #mastoadmin s,
Some of the remote images are marked as "not available" for me, but when I click through, they're there at the origin. Do I have to increase the S3_OPEN_TIMEOUT and S3_READ_TIMEOUT values (the default is 5 secs)?
I'm using #idrive as backend if that would matter.
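For reference, those two knobs live in Mastodon's environment configuration; a sketch with raised values (10 seconds is just an example to try against a slow backend, not a recommendation):

```
# .env.production (sketch) -- per-request timeouts for the S3 client
S3_OPEN_TIMEOUT=10
S3_READ_TIMEOUT=10
```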
Anyone tried connecting fediverse projects to Storj?
Is it true? They claim: "Fast, secure cloud storage at a fraction of the cost."
#pixelfed #mastodon #misskey #pleroma #rebased #opensource #foss #fediverse #instance #microblog #activitypub #s3 #storage #storj #cloud
Oh yes mama!
#Publii fully supports #Cloudflare #r2 (with the custom #Amazon #s3 endpoint).
And when you attach your site to your Cloudflare domain your static site is now 100% served from Cloudflare with no origin server needed!
Hell yes!
I think something has gone sideways with my Digital Ocean Space.
Now that #mastodon is running here, we'll wait a bit and see how it goes. If it's stable, and my infra isn't too far underwater, we could consider 2-3 #OrangePi 5 boards, some NVMe, and setting up #S3 storage via #minIO :). We'll see in February, once the end-of-year "good news" is behind us...
So this happened. I managed to delete and/or lifecycle out some unwanted files on #Amazon #S3 so that in theory I am using less than 3 GB (although metrics still show 685 GB). That is 682 GB of hidden? files or something else that I don't understand.
Also this graph below shows number of objects has doubled.
I'm aware that lifecycle clearing out can take a few days, but I "didn't expect the Spanish Inquisition", to paraphrase an old joke. See graph below. I'm hoping to see a downwards trend :)
#mastoadmin sadly #backblaze b2 is banned in my country (most probably, for the stupidest of reasons, the whole API domain got banned), so I'm looking for alternatives for a #mastodon storage solution.
I've heard about #idriveE2 from #idrive . Has anyone used it, along with #cloudflare or similar CDN? How was your experience with #idrive #e2 ?
Or has anyone been using other storage solutions apart from #aws #s3 and #backblazeb2 ? E.g. #wasabi ?
Geek question here. I have 1.2GB of log files on Amazon S3 that I am trying to delete. I am being charged each month for these. I started using S3 in 2015 and never imagined that logs would not rotate out.
I have tried deleting the logs, and I can only seem to remove about 400,000 of the suckers before it stalls at about that level, which is 400MB or more.
Because I'm billed in $US it all adds up. I'm trying to reduce my #Amazon bills #S3
Any techs, especially in #NZ, have any answers?
I’m migrating my #mastodon media storage from #AWS #S3 to #backblaze #B2 for cost reasons (if my maths is correct I should be able to cut my cost by about 82%).
But boy is it slow: it’s taken about 20 hours so far, transferring just 30 GB using rclone. 🙄
Thanks to #nginx and #cloudflare there is at least no downtime though …
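For anyone attempting the same migration, the rclone side is mostly configuration. The remote names, region, and credential placeholders below are illustrative; raising `--transfers` above rclone's default of 4 is often what makes copying many small media files bearable:

```
# ~/.config/rclone/rclone.conf (sketch)
[aws]
type = s3
provider = AWS
env_auth = true
region = eu-west-1

[b2]
type = b2
account = <keyID>
key = <applicationKey>
```

Then something like `rclone sync aws:old-media-bucket b2:new-media-bucket --transfers 32 --checkers 32` copies objects in parallel instead of four at a time.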
@ericmann #aws RDS/Aurora, in our case. Storage and compute being separate makes it very easy to spin up replicas or failover. Prod clones for testing/dev are fast, but snapshot restores still take too long.
#mysql and #s3 are the only two parts of our stack that aren't ephemeral. Replicating those allows us to jump around regions.
A couple of days later and my 4GB #Linode is still handling #mastodon with ease. Looks like it might be overkill for a single-user instance, in fact.
The biggest concern seems to definitely be media storage — save yourself the headache and set up #S3 (or something compatible like Linode Object Storage, DigitalOcean Spaces, Wasabi, etc.) *before* your new server has the chance to federate with others! And don't forget a cronjob to purge media older than a week. 💸
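That purge cron job can be a one-liner. The install path and schedule below are illustrative assumptions; `tootctl media remove` itself is Mastodon's built-in cache-pruning command, and its `--days` option defaults to 7:

```
# crontab for the mastodon user: purge cached remote media every Sunday at 04:00
0 4 * * 0 cd /home/mastodon/live && RAILS_ENV=production bin/tootctl media remove --days 7
```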