Oh hello again. It's the 9th of June already. How did that happen? I'm blaming the haze of the last few days on a bad chest cold that drowned me in a sea of NyQuil + sleeping + reading.
I have until the end of June before I move on to my next project. Galley's MVP is almost done - what remains is to set up an email system and to import some recipes. I've been putting off setting up the email system because I know it will have to cover a lot - user registration and confirmation, email and password changes, as well as other stuff I'm sure. From my brief scanning, the Elixir libraries for email management have definitely improved since I last perused them sometime in 2018.
Oh, and then I have to figure out deploys - something I should have probably set up as soon as I started running things locally. Oh well!
But for now, I'm celebrating having just finished setting up image uploading in Galley. Whenever someone contributes a recipe, they can now upload up to four images that get stored in S3. This is a seemingly small feature, but it was challenging. Here is what it entailed:
- Learning how to set up image uploads using LiveView.
- I recommend just using these docs for image uploading if you are using LiveView (rather than using an existing package).
- Figuring out how and where to store the image metadata in the database (I ended up using an
- Learning how to delete images once they had been submitted.
- Then, learning how to do image uploading to an external service.
- The LiveView docs have a guide that extends the initial one linked above, for working with an external uploader.
- Then, setting up a way to make sure images that are deleted from recipes are also deleted from S3.
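Roughly, the LiveView side of the steps above looks like this. This is a simplified sketch, not Galley's actual code - the module name, upload key, and bucket URL are all made up, but `allow_upload/3`, `cancel_upload/3`, and `consume_uploaded_entries/3` are the real LiveView functions the docs walk you through:

```elixir
defmodule GalleyWeb.RecipeLive.Form do
  use GalleyWeb, :live_view

  @impl true
  def mount(_params, _session, socket) do
    # Declare what the form may upload: up to four images.
    {:ok,
     allow_upload(socket, :recipe_img,
       accept: ~w(.jpg .jpeg .png),
       max_entries: 4
     )}
  end

  # Let the user remove an image before submitting.
  @impl true
  def handle_event("cancel-upload", %{"ref" => ref}, socket) do
    {:noreply, cancel_upload(socket, :recipe_img, ref)}
  end

  @impl true
  def handle_event("save", _params, socket) do
    # Runs once per completed upload; here we just collect the URLs
    # to store as image metadata alongside the recipe.
    urls =
      consume_uploaded_entries(socket, :recipe_img, fn _meta, entry ->
        {:ok, "https://my-bucket.s3.amazonaws.com/#{entry.client_name}"}
      end)

    # ...persist the recipe with `urls` in its changeset...
    {:noreply, socket}
  end
end
```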
Phew, that was a lot. The last point was a bit frustrating because I initially set out to do it without using a library, but in the end I resorted to using ExAws just for a single function - to run a DeleteObject API call.
I set out to not use a library because the external-uploading guide linked above includes a small link to a "SimpleS3Uploader", which is a zero-dependency Elixir module specifically for uploading files to S3. At the time, I read through the code and found it pretty intimidating. Nevertheless, I plugged it in as per the external-uploading guide in the hexdocs and it all worked.
So when it came time to try and figure out how to delete objects… I tried to just install HTTPoison and hope I could figure it out from the S3 REST API docs. It took me a while. First, I had to understand how to convert the description of the endpoint into what HTTPoison could request:
DELETE /Key+?versionId=VersionId HTTP/1.1
Host: Bucket.s3.amazonaws.com
x-amz-mfa: MFA
x-amz-request-payer: RequestPayer
x-amz-bypass-governance-retention: BypassGovernanceRetention
x-amz-expected-bucket-owner: ExpectedBucketOwner
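Translated into HTTPoison, my first attempt looked something like this (the bucket and key are placeholders; this is the unauthenticated version that, as you'll see, doesn't get far):

```elixir
# Naive delete: just hit the endpoint with no auth headers at all.
url = "https://my-bucket.s3.amazonaws.com/uploads/recipe-img.jpg"

case HTTPoison.delete(url) do
  # S3 returns 204 No Content on a successful delete.
  {:ok, %HTTPoison.Response{status_code: 204}} -> :ok
  {:ok, %HTTPoison.Response{status_code: code}} -> {:error, code}
  {:error, %HTTPoison.Error{reason: reason}} -> {:error, reason}
end
```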
That part was simple - but I kept getting a 403 - Access Denied. I tried messing around with bucket policies - and after changing the principal field in S3 I was able to delete objects! However, I knew this was insecure, as anybody could delete things in my bucket. After a while I was stumped - so I posted a question on Stack Overflow (which I haven't done in… four or five years?). Someone responded overnight after I went to bed - I was embarrassed to find out that I had simply failed to look into how the endpoint was to be authenticated. I had just assumed that the endpoint documentation (previously linked) listed all the stuff I needed to provide (including, seemingly, no authentication).
I blame the chest/head cold I had.
I think this is common to API documentation - there's an assumption that all the calls will require an authorization method, which is usually described once at the beginning of the documentation, so that it doesn't have to be repeated for every single endpoint that requires auth.
Ok, so I headed over to the S3 docs on authentication. Oof - it seems you can't just include an authorization key in the header; instead you hash your authorization key together with a bunch of other stuff - dates, content type, etc. All of this gets munged into a signature (related: see the definition of HMAC).
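To give a flavour of the munging: the AWS Signature Version 4 scheme derives a signing key by chaining HMACs over the date, region, and service, and only then signs the request. This is just the key-derivation step, simplified (a real request also needs a canonical request and a "string to sign" - see the S3 auth docs):

```elixir
defmodule SigV4Sketch do
  # HMAC-SHA256, per the SigV4 spec.
  defp hmac(key, data), do: :crypto.mac(:hmac, :sha256, key, data)

  # The signing key is a chain of HMACs: date -> region -> service -> "aws4_request".
  def signing_key(secret_key, date, region, service \\ "s3") do
    ("AWS4" <> secret_key)
    |> hmac(date)       # e.g. "20220609"
    |> hmac(region)     # e.g. "us-east-1"
    |> hmac(service)
    |> hmac("aws4_request")
  end

  # The final signature that goes into the Authorization header.
  def signature(signing_key, string_to_sign) do
    signing_key |> hmac(string_to_sign) |> Base.encode16(case: :lower)
  end
end
```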
Then I returned to the SimpleS3Uploader and realized a lot of that zero-dependency goodness was handling exactly this sort of endpoint call. So I tried to leverage some of that code (the hashing of the signature, the setting up of the dates that go into the request, etc.). It still didn't work.
So I installed ExAws and it worked. Sigh.
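For what it's worth, the whole saga boils down to one small function with ExAws. The bucket and key here are placeholders, and this assumes `ex_aws` + `ex_aws_s3` are in your deps with credentials configured:

```elixir
# ExAws handles all the SigV4 signing internally.
def delete_recipe_image(bucket, key) do
  bucket
  |> ExAws.S3.delete_object(key)
  |> ExAws.request()
end
```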
The ExAws repo has this in the readme:
ExAws is now actively maintained again :). It's going to take me a while to work through all the outstanding issues and PRs, so please bear with me.
This package, like so many, has a burden to be maintained. And this package handles so much stuff related to what AWS can do. I only really need to do two things - upload objects and delete objects.
Unfortunately, I hit a wall. This is maybe just a lesson in programming (again) to take breaks. I could probably take a break and come back and figure out how to hand roll the s3 http request. But when you hit the wall, you have to decide - do I want to die on this hill (or more appropriately, at the foot of this wall?) - or do I use an existing working solution and do I move on and get this project done?
So, hopefully Galley will be one of the quarterly projects that I actually get "done" and published this year (within its time frame of 3 months).
Next up… emails and deployment…
Thanks for reading!