This is a chronicle of the games I liked the most in 2023. Some of them were released before 2023; I just bought and played them this year.
I've only finished two of these games.
Here they are, in no particular order. Note that the platform is whatever I own it on. Some of these games run on other platforms as well. Some I've bought on multiple platforms (desktop + Switch).
A word on Switch vs PC. These are the two gaming platforms I currently own. Before I buy a game I ask myself: where would I play it the most? The choice comes down to this: is the game better suited for a small screen (visually and in terms of hardware), and does it work really well with a controller? If yes, then I'll most likely buy it on the Switch (especially if the price is discounted). If no, then it's probably PC. I also play certain games on the Mac, but only when I'm traveling for work and don't already own them on the Switch.
Genre: 1st/3rd person open world exploration, factory building and automation. Platform: Steam/PC.
Take Factorio, add 3D and some unique mechanics, subtract a bunch of anxiety, and you get Satisfactory. The overarching story is also similar: you're stranded on an alien planet and need to survive and evolve the technology to the point where you can leave. I like it because I have a soft spot for sandbox games, especially ones that let you alter the environment permanently.
There's a lot to like here. There are multiple technology tiers (up to 9 at last count), each progressively harder to reach due to requiring increasingly complex materials and factories to produce said materials. There's a lot of open world exploration, first on foot, later with vehicles.
Exploration will take a lot of your time (something I enjoy), because you will be searching for various resources you'll need to progress. Along the way you'll discover crashed ships with scattered materials that you can pilfer and new technology that you can assimilate, but also alien denizens that'll get rather annoyed to have you around, not to mention toxic or radioactive areas that are off-limits until you can research appropriate PPE.
Then there are the factories, which start small much like in Factorio, only to escalate in complexity until you end up with a veritable spaghetti of stacked assemblers, horizontal and vertical conveyor belts, gantries, and every industrial fixture you can imagine.
Automation is key here, and there'll be lots of it if you want to keep your sanity or evolve beyond the basics. There are also blueprints, which every respectable factory game has, allowing you to pre-define entire manufacturing flows and place them to be constructed in one click.
Vehicles are very important, but thankfully don't need a lot of investment to get started with. Eventually you'll have access to hypertubes (for quick travel), automated drones, trucks, trains, and more that I haven't yet unlocked.
I moved on to other games long before reaching the apex of the technology tree, but Satisfactory was... um... very satisfactory for the dozens of hours I played it. I'll very likely go back to it at some point. If you can catch a sale (or even without), it's a no-brainer. As I type this, it's $16.49 (45% off).
The good - Easier than Factorio, yet the 3D world makes it feel vast. The world is static (as in not procedurally generated) but huge, so there's a lot to explore before you get bored. Unlike Factorio, you can build vertically, meaning that you can stack related manufacturing facilities more efficiently. A lot of things to build. Combat is not generally dangerous if you keep up with the tech tree. Much fun at all levels.
The bad - It can get rather grindy at later stages. Just like Factorio, if you don't plan correctly you'll end up with factory spaghetti. Might require searching online for advanced tips & tricks.
The ugly - It's still in early access (but new content has been constantly released).
Genre: Action RPG. Platform: Steam/PC.
Simply put, Last Epoch is a spiritual successor to both Diablo 2 (the best Diablo ever) and Path of Exile. As a big fan of both these games, this was a no-brainer.
LE is a fast-paced RPG that puts you in the middle of the action from the get-go. There are 5 classes with 3 specializations each, for a total of 15 combinations that all play uniquely. More importantly, each skill in the game has its own tree, which means you can make a skill behave radically differently from its base version, and that's before factoring in synergies with other skills and the equally expansive class skill tree.
You'll progress quickly through a story-packed campaign to reach the real meat and potatoes: the endgame. Here you'll have the option of 3 or 4 different mechanics in the form of zones (called monoliths) and dungeons with progressive difficulty and random stats (similar to PoE), with an endless mode thrown in.
Itemization is pretty good, certainly better than Diablo 4 and perhaps on par with Diablo 2, with some original twists thrown in there. I'm not an expert, but all I can say is that it felt good to me, compared to Diablo 4's items which seemed boring af.
Bottom line - if you're thinking about buying Diablo 4, save $35 and buy Last Epoch instead. There's a lot more bang for your buck here.
The good - A great spiritual successor to both Diablo 2 and Path of Exile. A lot of fun leveling up, and the endgame is pretty damn good too. Good itemization with some original ideas. Powerful-feeling classes and specializations, each with distinct gameplay. Deeply customizable skills. Affordable and painless respecs.
The bad - For min-maxing and a chance to push higher-difficulty monoliths at the endgame, you'll end up researching builds online.
The ugly - The endgame grind. Missing specializations when I played it - now they're all in the game with v1 released.
Genre: Survivors-like retro roguelike autobattler. Platform: Steam/PC.
Remember Vampire Survivors? It's the autobattler roguelike with pixelated graphics that spawned a bazillion similar copycats. Halls of Torment is one of those clones, but stands out above most.
HoT brings a few unique ideas to the table. Chief among them are the items. You'll acquire items during runs, usually after defeating (mini)bosses and completing events. You'll then carry those items to a well that transports the item to the surface. Back at your camp, you'll be able to buy the item from a vendor. Once you've bought it, you can equip it on any of your characters.
If the previous paragraph didn't make much sense that's because item retrieval isn't very well explained and you'll have to figure it out for yourself. If you ask me, it's not the smartest way to extract an item from a dungeon, but at least it's a unique take.
Going back to the items themselves, each character has several slots (ya know, the usual head, chest, etc). Most items confer unique, class-specific, or skill-altering abilities rather than flat stat boosts, while others are agnostic and can benefit any class or skill. So that's good. Oh, and once you've purchased an extracted item from the vendor, you can equip the same item on multiple classes.
You unlock different classes and skills as you progress. There are the classic melee, ranged, magic, summoner, beastmaster, etc types. Like any good Survivors clone, you are presented with 3 skill choices whenever you level up during a run.
But Halls of Torment brings a certain Diablo 1-like vibe to it that is hard to resist. The gritty but detailed pixelated graphics, combined with the scaled-up characters and monsters (again, compared to Survivors) and the itemization, make it feel like a veritable child of Diablo 1 and Vampire Survivors.
The fact that I can play it on PC with an Xbox wireless controller is an added bonus.
If you like Vampire Survivors, get Halls of Torment.
The good - Unique take on the survivors genre. Very good art reminiscent of 90s RPGs. Solid and satisfying gameplay. Unbeatable price - only $5, often on sale for $4.
The bad - You will die. Many times. Item retrieval is weird. Itemization is not really my cup of tea, though interesting nonetheless.
The ugly - It's still in early access (but this only means a lot of content is yet to come).
Genre: Survivors-like retro roguelike autobattler. Platform: Steam/PC.
Take what I said about Halls of Torment above and apply it to Death Must Die, but throw in some Hades vibes in there as well.
You see, much like Hades, in Death Must Die your ultimate goal is to defeat another head honcho - Death itself. Along the way, every time you level up, a random god will offer you a choice between 3 abilities based on their alignment (the God of Fire will give you fire abilities, while the God of Thunder... well, you get it). There are synergies between the various elements and alignments, of course, which makes for some powerful combos if you get lucky.
Once again, there are various classes that you unlock as you progress through the (somewhat) sparse storyline.
There's also itemization, but this time a lot closer to a standard RPG. Each character has a paperdoll with equipment slots, as well as an inventory for excess items. There's also a shared stash back at the base. The items themselves are a lot more stat-based than in Halls of Torment, and they're color-coded white-blue-purple-etc. Some of the more special ones have skill-altering stats.
Thankfully you can equip items as soon as you find them, but certain items are class-specific.
This one's also playable with a wireless Xbox controller on a PC.
All in all, Death Must Die is a great addition to your Survivors portfolio.
The good - It takes the good parts from several games and combines them into an engaging survivors-like experience. Very satisfying combat with a variety of distinct builds, not just from the different classes, but also from the random gods you'll encounter. Pretty good itemization.
The bad - Items can feel generic after a while. Difficulty ramps up quickly.
The ugly - It's still in early access.
Genre: Physics-based space mining/exploration sim. Platform: Steam/PC.
I'm a sucker for space mining games. I haven't played a lot of them because I'm still searching for the perfect one. ΔV: Rings of Saturn comes close. Umm, ignore the Δ (pronounced "delta") - it's a weird character that has no place in the English language outside of a lab or a physics lesson.
But here's the rub - the physics aspect. Rings of Saturn is heavily physics-based. There's mass, momentum, inertia, acceleration, and a lot more of that sciency mumbo-jumbo. Fret not - it's presented in an easily digestible manner. Oh, and this is a 2D game, which makes it all the more interesting to me. Screw that 3rd dimension, it's only there to cause trouble.
There's a short but concise tutorial when you start a new game, and it serves well to explain how to control your ship. Thankfully the starter ship is small but very maneuverable, which is exactly what you want initially. Eventually you'll get comfortable enough with the controls that you'll want to spring for bigger, more sluggish ships.
The point of the game is to mine rare minerals in the rings of Saturn (ya know, that cool planet in our Solar system). You accomplish this in various ways, but the starting ship has only a mass driver (fancy term for a railgun) and an excavator-like scoop. You fly around and maneuver using your thrusters until you find a promising rock (read "big enough"), then you aim and blast it with your mass driver. After one or more shots the rock will shatter and sometimes reveal a mineral nugget. Because a mass driver fires a projectile, there's recoil, and the mineral kernel might also spin away from you at increasing velocity. Hence the delta-V (the difference in velocity between bodies - I hope I'm not butchering the concept).
So now you're faced with a hard decision: should you power up your engines to full burn and chase after the nugget, risking smashing into other rocks, or just let it go and find one that sticks around after being cracked open? Choices, choices. Oh, btw, smashing into hard objects in this game can quickly evolve from bad to fatal. Ships have many modules that will get damaged by impacts, so you want to be extra careful when navigating. A common disaster scenario as a noob (or even later with bad luck) is losing maneuvering and braking thrusters and spinning out of control. Of course, the deeper you go into Saturn's rings (you start at the outer edges) the denser the floaty rocks, so you want to get those controls down to a science before you blunder your way in deeper.
Don't worry though. Once your cargo hold is full you can head back to the station for a repair and a refuel, and to sell your load. Early on, it only takes a couple of trips before you can start upgrading equipment. Eventually you'll have access to mining lasers, microwaves, mechanical arms, automatic collectors, sophisticated radar, powerful engines, and of course big ships. There are even floating refineries that are sluggish as hell but can carry and refine huge loads.
One piece of essential equipment that you'll want as soon as you can afford it is the scanner, combined with a geologist crew member. This will identify each mineral chunk as it's ejected from its rocky cocoon, showing a readout of the chemical symbol and the value. The chief benefit is that it will allow you to focus on the most valuable nuggets, while ignoring cheap ones.
You can (and should) hire crew for your ship. Crew members must be paid a regular salary, and skilled ones don't come cheap, but they'll make a huge difference in how a ship operates, and ultimately will greatly improve your mining yield. As I mentioned above, a geologist will earn their salary on the first trip.
There's more to the game than just mining. I haven't got very far, but there are various stations that you'll discover, pirates and odd characters you'll run into, and space races you can participate in if you feel inclined (just don't bring your refinery ship to the race).
If I'm not mistaken, this is one of those indie games that was made by a single person, or at least a tiny team. Some of that is reflected in the grittiness, but it's also why this game is so amazing and special.
If you like space (mining) sims, this game is a must, especially for the measly $10 it costs. Oh and it also runs on a Mac, so there.
The good - Very unique take on the space mining genre. Solid physics engine that will literally throw you for a loop. Wide variety of ships and mining gear. Different ways to enjoy it (miner, pirate, racer, etc). Vast (though repetitive, but it's space duh) area to explore.
The bad - A lot of things aren't explained very well beyond the initial brief tutorial, leaving the player to figure it out (though some people like this).
The ugly - Time consuming if you want to discover all it has to offer.
Genre: Deck-builder roguelike. Platform: Nintendo/Switch.
You must have been living under a rock if you haven't heard of Slay the Spire. It's been quite a sensation over the last couple of years. I'll admit that I'm not big into deck-builder games, and this is actually my first one.
I spent quite a lot of time playing it on the Switch, and I still haven't managed to get very far. This is a game of skill and strategy that will take hundreds or thousands of attempts to master.
The controls work great on the Switch and the graphics are perfect for it, making it a great complex-yet-portable game. You can easily kill a 10-hour flight engrossed in it.
Anyway, I won't go into much detail describing Slay the Spire because a lot of ink has already been spilled over it. Suffice it to say that if you're into deck-builders, this one's a must.
The good - Seriously fun and engaging gameplay. Cute art. Solid controls.
The bad - 2 of the 4 character classes are not that appealing to me (but that's because I suck).
The ugly - I'm not that good at it and have barely made it halfway through the campaign.
Genre: Retro-inspired strategy roguelike autobattler. Platform: Steam/Mac & Nintendo/Switch.
Loop Hero is honestly hard to put in any box, as I've never quite played anything like it. Hence the weird genre amalgamation I made up. It has a little bit from each, with a sliver of idle clicker thrown in.
You start with a hero of a particular class (more classes unlocked as you progress) who moves around a randomly generated circular track. Monsters spawn periodically; every time your hero encounters them there's quick auto-combat. If you die, it's game over. But that happens later. In the early stages your hero won't have much trouble quickly vanquishing any foe they encounter, especially since monsters drop loot that you can equip to enhance your fighting and survival abilities.
Every time you complete a loop, monster levels go up but so does the loot. So next time around you'll get a chance to upgrade your gear when something better drops. There's a day cycle as well, causing monsters to respawn.
Monsters also drop two other kinds of resources. There are materials that you use for meta progression in between runs (after you ded) to expand your base, hence giving you various bonuses. Then there's the meat-and-potatoes of this game: building and terrain cards that go into a deck and can be placed on the map strategically. Beware though: older cards that overflow the deck will be automatically discarded, so you should use the best ones soon.
The terrain cards fall in several categories.
For starters, there are map tiles that will give you resources when you place them on your map. When similar tiles are adjacent, they reinforce each other, giving you survivability bonuses (like more HP, or regen after each loop). At the same time, if a "mini-biome" becomes too large it will spawn additional enemy types. For example the mountain biome will spawn harpies when it reaches a certain size.
Then there are enemy structures that you can place either on the path itself, or next to it. They will spawn new enemies roughly every day cycle. Juggling how many buildings you add before you are overwhelmed with spawns is a key part of the strategy.
Finally there are buff tiles that can be placed near the loop path. You'll get specific buffs while traveling in their radius. So you might want to use these buffs strategically around bigger pockets of enemies or enemy structures.
There are several hero classes themed around the standard melee, rogue, magic, etc. One of my favorites is the necromancer, who auto-summons skeletons to fight for them. With this class your equipment is focused on skeleton survivability and damage. With other classes you can play defensive, offensive, vampiric, or evasion builds. Evasion is particularly satisfying, as you'll take very few hits when you have a ton of it, with the downside that your defense suffers, so the hits that do land hurt.
The end goal is to survive enough loops in order to trigger an arch-boss. There's probably more to it, but that's all I can say as I haven't got to that part yet.
The good - Unique gameplay. Can be left to run in the background and do its thing, with sporadic interaction to place tiles or equip better gear. This reviewer finds the retro art very pleasing.
The bad - Long grind to beat it.
The ugly - There doesn't seem to be an actual endgame, just attempting increasingly harder loops.
Genre: Story-driven adventure/fishing/shop management sim. Platform: Steam/Mac.
Dave the Diver was one of the most highly regarded games of 2023, and for good reason. Starting with the pixelated but cute and intricate graphics, following up with the likable characters, and rounding off with a simple but solid game loop, Dave can easily swallow hours of your time. It has strong East Asian vibes (I believe it's made by a Korean team), but the NPCs are multinational, each with their own quirky, lovable personality.
I played my copy on a Mac with a wireless gamepad, but it runs just as well on the Switch.
You play the role of Dave, a meek fellow who finds himself in an exotic lagoon with a deep trench and a sushi restaurant nearby. Right off the bat, Dave is sucked into helping the local community.
There are two main game modes. During the day, Dave will dive in the trench (or hole as it's called in the game) and catch fish using various harpoons and other tools. At night, Dave helps to run the sushi restaurant by serving the customers with dishes cooked from the fish he caught previously.
I'm not much into shop management sims, but Dave's restaurant mode captured me right away. The gamepad controls are very intuitive and I get a kick from doing my best to serve customers efficiently. At the end of the shift there's a satisfying tally screen that shows what profits you've made.
The earnings are used for a variety of upgrades, including fishing weapons and gear, improvements to the restaurant (furniture and decorations that will bring in more customers), and hiring helpers (thus serving more customers per shift).
Fish are used for various dishes that initially sell for a couple of bucks. You can use excess fish to upgrade a dish, but you'll also discover more expensive dishes as you progress. Ingredient requirements will also increase. Eventually, elaborate dishes will be selling for hundreds of dollars.
The fishing game is equally satisfying. Your equipment (chiefly the air supply) limits how far down you can swim, and depth also affects how quickly your air depletes; both can be improved with upgrades. Most fish are passive (though they'll flee if you don't one-shot them), but there are plenty of aggressive ones too. Sharks, anyone? There are several types of shark, btw. You catch fish by aiming your speargun and pulling the trigger; if you hook the fish you can reel it in. It takes some practice, but the learning curve is shallow. Later you'll graduate to sniper rifles and grenade launchers, among other things.
In the water you'll find various powerups, materials, and ingredients in chests and on the ocean floor.
I would probably be happy enough with the two modes, but there's much more. For one thing, there's the overarching narrative - a pretty compelling story involving a mystery that you'll work towards uncovering. You will, of course, go much deeper. There are boss sea monsters that you'll fight. There are at least 4-5 minigames you can play if you get bored. There are 3 types of farms you can manage later on. And Dave gets a smartphone with a lot of apps used to buy and upgrade equipment, manage various systems remotely, play some of the minigames, communicate with the NPCs, and so on. There's even an encyclopedia of all the discovered fish.
Dave the Diver is a lot of fun. I moved on to other things before finishing it. Without spoilers, I wasn't exactly fond of the later story, and the boss battles were getting more and more stressful, requiring careful timing and multiple deaths before succeeding.
The good - Very enjoyable, good vibes, A+ art and graphics. The main fishing + restaurant management gameplay loop is very solid and satisfying.
The bad - There's a bunch of stuff (like most of the minigames) that could have been left out. I don't think they contribute much to the game, and they're mostly just fluff.
The ugly - Boss battles got too hard for the amount of effort I was willing to put in. I empathized a lot more with the initial set of characters, and the second part of the story left me disconnected.
Genre: Physics-based 3rd person 3D space salvage. Platform: Steam/PC.
People like breaking things. That's why there's a plethora of disassembly-themed games. After all, it's a lot easier to take something apart than to put it together. This is what Hardspace: Shipbreaker is - a scratch-your-itch game for the disassembly crowd, in space.
You are a shipbreaker, a space dock worker tasked with breaking old ships apart for recycling. When the story begins you are in crippling debt to the corporation for which you're slaving.
The first third or so of the game is a massive tutorial that doesn't feel tedious like most game tutorials do. It does a great job of gradually introducing you to movement and controls, the various tools, and the increasingly complex ship systems and hazards you'll have to contend with.
Each ship floats in space in proximity to 3 recycling areas: a systems barge, a furnace, and a materials recycler. You must put each of the 3 types of parts in the corresponding area to get proper credit; if you put a part in the wrong place you'll be penalized instead. You do that with a grappler tool that uses a sort of tractor beam to grab hold of a part, then you drag it to where you want it to go while imparting momentum, and release. The physics (zero gravity, no atmosphere) will take care of the rest. It's a very satisfying process and it works with heavy items as well, but beware: the heavier the item, the more momentum it has, so you need to account for that.
While the grappler can also be used to get around - grapple onto a distant surface and reel yourself in - there are other tools in your arsenal. Chief among them is the laser cutter. This tool has several modes: it can heat a spot, or switch to a horizontal or vertical slicer. Use your judgment to determine which mode fits the job. Cutting near hazardous areas such as cryo or fuel tanks can be fatal if you're not careful.
The tether tool deserves special mention. This is a consumable "energy thread" that you attach one end of to a salvage item, and the other end to a recycling area intake. The tether will then pull the item in that direction, freeing you to do other things. The cool thing is that you can attach multiple tethers for extra pulling force, or you can chain multiple items together, or you can tie items to each other, or you can tie items to buttresses in the dock to peel off large sections of a ship while preventing it from drifting in the wrong direction. Tethers are paid consumables, and the max you can carry is low at the beginning, but you can upgrade the capacity later. It's an incredibly versatile tool that can be used strategically in many scenarios.
Your EVA suit has a jetpack (using fuel) and an air supply. If you run out of either you'll get stranded, or die. You can replenish both (as well as other consumables) at a vending booth outside the main station airlock. Due to the nature of the credit/debit system, and perhaps because you start in massive debt, it doesn't really matter how much the consumables cost. It doesn't really matter if you die either. A clone is ready to replace you, at a cost. It only plunges you farther into debt, but that won't matter at later stages of the game.
Just about the only downside of dying is that your shift is cut short and you wake up back in your cabin. Btw, you work in 15-minute shifts, so the idea is to recycle enough materials before the shift ends to turn a profit. Again, the game is laid back enough that this isn't an issue. Between shifts you'll spend some time in your cabin. It's a cozy little space that you can customize to a small extent with posters that you find on ships. It's also where you can buy upgrades (not with money but with XP that you gain by hitting salvage milestones), and where some of the narrative unfolds.
Speaking of narrative, a compelling story is woven into the game loop. There's a series of NPCs you'll interact with - your coworkers, superiors, and corporation executives. Without spoiling anything, it involves drama, some tense moments, sabotage, union action, but ultimately a happy ending. The characters are relatable and likable. They chime in as you progress, as portraits you can't interact with. The voice acting is top-notch and chisels the personality of every NPC to perfection. It made me actively root for the downtrodden and hate the bastards.
This is one of just two games I finished in 2023, partly because I played it obsessively for a couple of weeks, partly because the progression is brisk, and partly because there's not a ton of content. I played about 43 hours, but I'm sure a second playthrough would be quicker since I'm so familiar with it.
The good - Performant 3D engine, slick UI. Satisfying mechanics aided by solid-feeling tools. Controls are easy to master. Tangible rewards and visible progress. The tether system is brilliant; I want that in real life. Great voice acting, engaging story, and relatable characters.
The bad - It will become obvious that ships are put together in predictable ways, even if increasingly complex, removing some of the joy of discovery. Death is not a huge deal, making you careless and sloppy after a while.
The ugly - There's no endgame, per se. Sure, you can continue disassembling ships, but at this point you've already gone through the highest level ones, so it's just more of the same.
Genre: Eldritch adventure, sea exploration, and fishing sim. Platform: Steam/PC.
I've had this game on my radar for a while, and bought it during a Steam sale. It works well with keyboard and mouse, but it's also available on the Switch. The controls are simple enough, though.
Dredge is a narrative-driven fishing and exploration game with a side of occult and eldritch terrors thrown in. I love the sleepy but terror-under-the-surface atmosphere that starts wearing on your psyche during the first few day-night cycles.
You play as the captain of a little boat who finds themselves among a chain of tiny islands in a vast sea. Some of these islands harbor tiny hamlets populated with the NPCs who drive the story. The hamlets usually have additional facilities like vendors and a shipyard. You'll also encounter solitary NPCs tucked away on various islands.
The main game loop involves fishing. Putt-putt your little boat around until you find a fishing spot (indicated by a bubbly pool) and cast your rod. There's a minigame to reel the fish in, combined with some inventory Tetris. Your boat has limited inventory in the form of an irregularly shaped grid. Ideally you'll want to fill the hold entirely before returning to town. Fish and other items can be rotated for optimal use of the available space. The bigger the fish, the more space it takes, and fish are irregularly shaped too, which adds to the challenge. Boat equipment such as reels, engines, nets, etc also takes up space. Later you'll upgrade the capacity, but it won't ever feel like enough.
Once you have enough fish, return to town. Typically this will coincide with nightfall, when you definitely don't want to be caught out at sea. This is when the night terrors emerge. I won't spoil much, but it involves hallucinations and half-materialized apparitions and can be mildly unsettling. Unlike real life, nightmares can kill you in this game. So do your best to be back in town before darkness falls, or be prepared to dodge phantoms and run for your life.
As you talk to the townspeople the narrative unfolds. You can follow the main story but there are various side quests that are lucrative, as they'll provide you with useful items, equipment, and extra materials.
You'll use money and materials to upgrade your ship as you go along. This is a very important progression mechanism, as it allows you to fish in more advanced spots (for example there are oceanic, shallow, swamp, volcanic, etc spots that each require the appropriate type of rod), move faster, and dredge, among other things. Dredging (apropos the game title) is done at a special bubbly spot that you'll encounter randomly, just like fishing spots, and it involves yet another type of reeling minigame. Instead of fish you'll pull out materials (used for upgrades) and items (either for quests or that you can sell).
I enjoyed the fish-explore-upgrade-venture loop quite a bit. I wish the developers leaned even more into it, but ultimately this is a grand adventure wrapped around a satisfying resource gathering experience.
The game imparts a sense of ominous quiet. Peaceful at the surface, there's an impending hint of something lurking underneath, or at the very edges of your perception. The uneasiness grows as you untangle more of the story. At the same time, the nighttime feeling of helplessness diminishes somewhat as you upgrade your ship and your character, as this confers various pieces of equipment and skills that reduce the eldritch encounter risk.
The quests involve finding various items (through fishing or exploring), discovering new areas and NPCs, and solving various puzzles. The puzzles aren't especially tricky. While some are clever, making me feel like a real Sherlock when I figured them out, others are less obvious, sending me searching for a solution online. Granted, I was also pressed for time, as I needed to travel soon and wanted to get through as much of the game as possible.
And that I did. To my surprise, I finished it in just over 16 hours. It ended quite abruptly after an incongruous dialogue with the main questgiver. A message warned there was no going back after that point, but I assumed it might be some sort of final boss battle. Instead, it played a cutscene and then, cue the credits. It kinda felt like all my efforts thus far had been in vain.
Dredge was fun, though I strongly feel they should have allowed free play after the ending. I don't regret the purchase, but I would think twice before paying full price. Thankfully it goes on sale quite often on Steam and Switch.
The good - Tense, suspenseful atmosphere. Gripping story (minus the ending). Cute cartoony 3D art. Pretty good fishing & exploration experience.
The bad - Some of the puzzles weren't forthcoming, making me search online for solutions (I'm not a fan of puzzles and brain teasers in games).
The ugly - Too short. Very abrupt and unsatisfying ending. It should give you an option to continue playing.
Roboquest (Steam/PC) - a fast 3D roguelike FPS. You're fighting hordes of robots across a series of biomes divided into smaller areas. Each biome culminates in a big bad boss that you have to defeat in order to progress. The main highlight is the random wacky guns that drop from events. You can carry only 3 at a time, so you need to discard one when you find something better. You also get powerups that give permanent boosts to stats and abilities for the current run. Dying respawns you back at your camp, where you can take advantage of the meta progression to improve skills permanently. There are several player classes as well, though none of them feel particularly appealing or special. While the game has very cute cel-shaded graphics (that I dig a lot), runs very smoothly, and has very satisfying combat, I ultimately gave up on it because the combat difficulty ramps up too quickly. After endless attempts I was barely able to beat the first boss twice, only to get clobbered hard in the next biome.
Stacklands (Steam/PC/Mac) - roguelike deck builder strategy with a twist. This is another game that is hard to pin to a specific genre. The overall goal is to build a settlement with cards, but also to expand it and survive enemy attacks. You start with a random starter card pack containing an assortment of types. You then stack cards on top of each other to perform an action. For example, stack a villager card on top of a berry bush card and the villager will start collecting berries (for food). Every day you need to feed your villagers. There are money cards that drop, but you can also sell a lot of the items you collect. Use the money to buy more packs. Get more villagers (I forget how). Stacklands has very original gameplay, but I stopped early because I am too dumb to make any sort of meaningful progress. For that reason I can't give any more details.
Strange Horticulture (Steam/PC) - dark mystery story-driven adventure shop management. I'll admit that I was deceived by the premise of this game. I thought there would be heavy shop management involved, but instead I discovered that it's mostly a linear story wrapped around inheriting a herbalist store where the main tasks revolve around identifying new plants and providing them to a rotating cast of NPC visitors, each with their own problems and dark secrets. There are various puzzles to solve which weren't very demanding. Ultimately I abandoned it because I get bored with story-heavy games, and the plant identifying aspect got tedious after a while. Good art though, and soothing music. Overall not a bad game to chill to.
Chillquarium (Steam/PC/Mac) - idle clicker aquarium management. A game with very cartoony pixelated art: you start with a basic aquarium and a handful of boring juvenile fish. You feed them by clicking in their vicinity, which drops fish pebbles (food). After a while they become adults, revealing their "true colors". You can sell these fish to buy fish card packs that can contain higher qualities (blue, yellow, purple, orange), kinda like items in an RPG. The higher the fish tier, the longer it takes to grow up. Rarity is also determined by the color of the fish itself (gold being the highest). Raise them to adulthood, sell the ones you don't want to keep, rinse and repeat. The ultimate goal can be whatever you want, but in general it's collecting the rarest fish. There's a fish ledger that is fun to fill up as you collect 'em all. It's a pretty chill game for a pretty low price. It doesn't have a lot of replayability, but it can be helpful for relaxing, if that's your thing.
My Game of the Year for 2023 is undoubtedly Last Epoch. I'm a sucker for ARPGs, and I played LE the most in 2023 (and rather obsessively). After Diablo 3 and 4 I became quite jaded about ARPGs, but Last Epoch revived that passion. It helped scratch that itch, and I would absolutely play it again.
I thought I would use the Migration Assistant to move all my stuff over. I was quite surprised by how well it worked. Here are some thoughts and insights.
The Migration Assistant can be used during the onboarding process as soon as you power up the new machine. The "donor" needs to be powered on and on the same network. I selected everything to be moved over.
I was a little uneasy when it gave me a 13 hour estimate. In the end it took 1 hour. Keep in mind that the old machine had 1 TB while the new one has only 512 GB. The total data moved was somewhere north of 200 GB and I was left with 118 GB free. If you think that's low, it's because I have a lot of duplicate stuff that will be cleaned up soon.
I've set up a fair share of Macs over the years, but I don't recall using the Migration Assistant before. Surprisingly, it did an amazing job.
It literally moved over all of my stuff.
Not only did my browsers still have the same tabs open, but I was logged into all the websites from the old machine.
My documents were all there, including the ones not in iCloud.
Apps, likewise, were all there and worked without a hitch. Some exceptions apply, read below.
The craziest thing is that the movie I had started watching in VLC on the old machine resumed in the same spot on the new Mac.
Best of all, the coding tools were in place (again, with some exceptions): Laravel Herd, DBngin, TablePlus being the most important.
My ssh keys were in place, and I was able to ssh to my remote server without a hitch. I also connected to the remote database (with the same ssh key) without issues.
PHP worked just fine at the command line, and my side projects ran in the browser the same way I left them. composer was fine.
git worked as before.
Some of the apps were made for x86 and required Rosetta on the M3. I opted to uninstall them and download the ARM versions instead. Unfortunately, certain apps are still missing an Apple Silicon build, thus requiring Rosetta (looking at you, Garmin Express and Steam).
A few more that I had to replace with Apple Silicon versions are: 1Password, and JetBrains Toolbox.
At the command line (I use Warp for my terminal) I had a couple of issues. Bear in mind that all of these are related to migrating from an Intel machine to an Apple Silicon machine, and aren't really caused by the Assistant.
My Node version wasn't compatible with the new machine, giving a `zsh: bad CPU type in executable: node` error. This was fixed quickly by installing the latest version with Herd.
Homebrew gave a `Bad CPU type in executable` error when I ran it. I uninstalled it, then reinstalled it. In my `.zprofile` I replaced `eval $(/usr/local/bin/brew shellenv)` with `eval "$(/opt/homebrew/bin/brew shellenv)"`.
bat (a replacement for cat) gave a `zsh: bad CPU type in executable: bat` error. Fixed by running `brew install bat`.
ncdu (a replacement for du) gave a `zsh: bad CPU type in executable: ncdu` error. Fixed by running `brew reinstall ncdu`.
rustup gave a `zsh: bad CPU type in executable: rustup` error. I fixed Rust by running the official install script: `curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh`.
cargo and rustc continued to give this error even after updating Rust: `error: command failed: 'cargo/rustc': Bad CPU type in executable (os error 86)`. I should have done a fresh Rust install from the start. So I uninstalled everything with `rustup self uninstall`, ran the installation command again, and all three were working.
I also ran into problems with `npm run <any-script>` in a project; `npm run dev` failed with `[webpack-cli] Error: spawn Unknown system error -86`. This was fixed by installing Rosetta via `softwareupdate --install-rosetta`. I had to do that at some point anyway, since a few programs refused to run without it, but I wanted to leave it for last.
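If you hit similar "bad CPU type" errors after a migration, a quick way to find the culprit is to ask `file` which architecture a binary was built for. This is just a generic sketch (the `arch_of` helper name is mine, not part of any tool); on Apple Silicon, an Intel-only binary will report `x86_64` with no `arm64`:

```shell
#!/bin/sh
# Print the architecture(s) a binary on the PATH was built for.
arch_of() {
  file -b "$(command -v "$1")"
}

arch_of ls    # e.g. "Mach-O 64-bit executable arm64" on an M-series Mac
```

Anything that prints only `x86_64` is a candidate for a reinstall from an ARM build (or will need Rosetta).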
Apple's Migration Assistant has done a bang-up job, in my opinion. Not everybody likes to copy over old settings and apps, but I'm lazy and take no pleasure in setting up a machine from scratch. Besides, the old machine had been set up fresh only a year ago, so it's unlikely that a lot of bitrot had set in.
The only issues that gave me some headache were entirely related to changing architectures from x86 to ARM.
If you're in a similar situation (or just lazy like me), I highly recommend it.
I'll keep an eye out for issues over the coming weeks and update this post accordingly.
Here are some of the design decisions and highlights.
I came close to deleting the site at the end of 2023. Looking back, I'm glad I didn't. The refresh has given me a new desire to blog (at least for the time being).
I went into this with two overarching themes in mind.
First, I wanted v2 to be more content-focused, starting with the landing page. Previously, the site index had a lot of yada-yada about me. I moved all of that to separate sections under About, including work experience, hobbies, and side projects.
While initially I saw it as a personal/portfolio site with a blog attached, over time I realized that the landing page had remained frozen in time. Truth be told, I don't think anyone benefited from reading a long list of my personal stuff whenever they loaded the site, especially since that content was mostly static.
Now, the landing page is "alive" with featured, recent, and recently updated posts. There's also a tag cloud because I like tag clouds and it reminds me a little of the old web.
Second, I wanted to simplify the design and give it a slight utilitarian/brutalist vibe. I was also toying with the idea of going the opposite way by making it very colorful, but in the end I settled on what you see currently. For a first pass I think it looks great, though it's only 2/10 brutal if you ask me.
I touched on this here, so I won't rehash it. Basically I upgraded Jigsaw to the highest version possible, though not the latest because I'm constrained by Netlify's outdated infrastructure.
I came across Modern Font Stacks and immediately embraced the concept. Using native fonts is a great way to improve site performance. I gave up on the initial idea of using fancy fonts and self-hosting them.
Here's how I changed my fonts. I think they look a lot better and load instantly on any platform, without jank.
`Lato` -> System UI stack: `font-family: system-ui, sans-serif;`
`Bitter` -> Transitional stack: `font-family: Charter, 'Bitstream Charter', 'Sitka Text', Cambria, serif;`
`Consolas` -> Monospace stack (very similar): `font-family: ui-monospace, 'Cascadia Code', 'Source Code Pro', Menlo, Consolas, 'DejaVu Sans Mono', monospace;`
I also changed the blogpost font size from 20px to 16px. I've become partial to small text lately.
I narrowed the main content column from ~110 characters to ~80 characters. This makes it easier to scan a line of text and is just over the commonly recommended maximum of 75 characters per line.
My JS bundle stayed the same. At 231 KB I think it is too high, but this is coming mainly from Jigsaw and a couple of Vue components. I should really refactor these to Alpine or similar but I don't have the willpower right now.
The CSS bundle dropped slightly from 34.7 KB to 31.3 KB. Not a lot, but better than nothing. I can do more optimizations here for sure. For one thing, I think I have one breakpoint too many (both `sm:` and `md:`). For another, I would like to do a color pass at some point and restrict the color palette (currently teal) to 2-3 theme shades instead of 3-4.
Lighthouse scores are 100 for performance and almost 100 for the other categories. There are certainly a few improvements I could make here as well.
To me, putting an item in a category has always felt the equivalent of putting it in a folder, in other words it can't be in more than one category at the same time. In a default Jigsaw installation, categories act like tags.
I like tags because an item can have more than one tag, so it made sense to rename categories to tags.
I also changed tag names from camel case to lowercase, or kebab case for multi-word tags. Examples:
`/blog/categories/Laravel` -> `/blog/tags/laravel`
`/blog/categories/MySQL` -> `/blog/tags/mysql`
`/blog/categories/DevTools` -> `/blog/tags/dev-tools`
I think I might also rename the `general` tag to `random`.
There are, of course, many layout improvements on every page, particularly on the landing page, the blog index, blog post, post archive and about section.
One of my redesign goals was to change the color palette from the standard Tailwind `teal` to something else. This kept me going back and forth for days, without a clear result. Teal is rather cold and sterile, and I wanted something more vibrant - a red, purple, or yellow/orange. Because this was holding me back, I decided to leave it as is for now and continue exploring options after launching v2.
I am very happy with how the refresh turned out. It only took one month of late-night work sessions, and I launched it unceremoniously on February 1st. Truthfully, I wasn't expecting to complete it so soon.
The main benefit is that a clean, fresh look motivates me to post more frequently. It remains to be seen how long this newfound enthusiasm will hold, but I've already collected a bunch of ideas on various developer-adjacent topics that I would like to post.
There are many things that I'd like to add and improve, however.
Apart from posting, I'd like to add a blogroll section for some of my favorite tech blogs. Then I think it would be cool to have games and books sections where I briefly mention my favorite games and books from each year. These last two are shaping up to be a lot of work, so I can't promise they will happen.
Finally, I would like to implement a GitHub-based commenting system with utterances. Why GitHub? Three reasons: 1) it works well with static websites (and is free to boot), 2) the comments remain under the blog owner's control (unlike 3rd party systems like Disqus), 3) I want to limit commenting to developers (GitHub integration also takes care of authentication).
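For anyone curious what that integration looks like: per the utterances docs, the embed is a single script tag dropped into the post template. The `repo` value below is a placeholder, and the other attributes are the standard ones from their setup page:

```html
<script src="https://utteranc.es/client.js"
        repo="your-github-user/your-blog-repo"
        issue-term="pathname"
        theme="github-light"
        crossorigin="anonymous"
        async>
</script>
```

Each post maps to a GitHub issue (here keyed by pathname), and the issue's comments render under the post.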
To conclude, I had fun redesigning the site, and I deem the refresh a big success. Let me know how you feel on the socials, and I'll catch you in the next blogpost.
You might consider using SQLite in your next project. My reasons for using it have multiplied over time. Here are some of them.
A database contained in a flat file is lightweight and easy to manage.
Portable and easy to backup. It's a flat file, just copy-paste it.
Performance is on par with "traditional" databases like MySQL. It can also handle millions (billions?) of records easily. I picked these up from various articles, but I don't have any concrete citations.
Over the past 1-2 years there's been a lot of chatter on the interwebs about various companies building successful products on SQLite. I'm seeing more of this as time goes by and people realize that it's a serious contender.
Easy to embed in a desktop app made with Tauri or Electron.
Taylor Otwell has been talking about making SQLite the default connection in Laravel 11. This is yet to be confirmed, and it might not even happen, but just the fact that it's being considered is a strong indicator that the community is leaning into it.
Aaron Francis has talked on a few podcasts about building a static site generator with Laravel and SQLite. The details intrigue me a lot and I'm very excited to see what he'll build. It sounds a lot like my use-case.
Here's a great article that explains the strengths (and some weaknesses) of SQLite compared to "traditional" databases like MySQL and PostgreSQL.
I've already built a small/experimental/throwaway(?) project on SQLite in 2023 and I really like it. In 2024 I would like to start working on a new idea, and it will absolutely be on SQLite.
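The "portable flat file" point above is easy to demonstrate with the `sqlite3` CLI (assuming it's installed; `demo.db` is just a throwaway name):

```shell
#!/bin/sh
# Start clean so the script can be re-run.
rm -f demo.db demo-backup.db

# The database is created on first use - no server, no daemon.
sqlite3 demo.db "CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT);"
sqlite3 demo.db "INSERT INTO notes (body) VALUES ('hello sqlite');"

# Backing up really is just copying the file.
cp demo.db demo-backup.db

sqlite3 demo-backup.db "SELECT body FROM notes;"   # prints: hello sqlite
```

(For a live database under write load you'd use `sqlite3 demo.db ".backup demo-backup.db"` instead of a raw `cp`, but for a quiesced file the copy is the whole story.)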
I've been struggling with this concept for a long time. I try to be positive online, whether I'm posting my own thoughts or replying to someone else. However, occasionally criticism might be warranted, yet I hesitate to reply in any way that might be construed as negative. Am I wrong here?
Three recent scenarios come to mind.
A popular framework about to receive a major version is advertising a new feature that I personally think might be a regression for onboarding new users, and even for seasoned ones. I've been tempted to say something about why I think the headliner feature isn't that great, but I realize I would be buried by the supporters of said framework (of which I am one!). To be clear, I am supremely grateful that this framework exists, yet even the best things can (and should) be criticized occasionally.
Another popular framework also about to receive a major version introduces a new API that doesn't seem that great to me, although it is lauded by the expert users of the framework. Once again, I hesitate to express my concerns. But perhaps I am wise to keep my mouth shut - for one thing, I'm not an expert at it (just a very happy user); for another, I might find the final incarnation very pleasant to work with (in which case any misgivings would have been unfounded). I swallow my criticism.
A popular developer who maintains a very popular framework changed their avatar and asked what people thought. Everyone complimented them. To me, the new avatar makes them completely invisible among a myriad of others, chiefly because there's very little contrast between the person's face and the background. At a casual glance, the person blends into the background. The old one had a contrasting background, which made it instantly recognizable, at least to my eyes. I started to reply that I wasn't digging it, but I felt like a minnow swimming against a roaring flood, so I deleted my post. Don't get me wrong - the photo of the developer is great in both versions; it's just that the new avatar looks very anonymous.
There you go. Sometimes you have thoughts and criticism that go contrary to popular opinion. Should you voice those concerns in a public forum or keep them to yourself? I guess I chose the third option: write about them in an obfuscated way on my own blog.
I've been operating this blog since 2019, but I wish I had started one much sooner in my career. Nevertheless, it's 4+ years old now and I've managed to post several times a year on various developer-adjacent topics.
2023 has been the worst year for growing my audience, for several reasons –
Twitter's death brought a lack of desire and motivation to post anything constructive. It meant no more tweeting carefully-crafted tips or high-quality engagements. As a side-effect, there's less incentive to write blog posts as a way to grow an audience (though I still write when I feel like it). Why would Twitter be the catalyst for my blogging? Simply because it used to be an amazing conduit to audience-building before the unfortunate change of ownership.
Moving to Mastodon was cool - I think everyone should embrace it at this point - but it came with its own downsides. First, the community is a lot smaller and less focused on dev topics. I use social media strictly for the developer community (with a focus on Laravel, PHP, Svelte, and JS-adjacent topics). Second, I barely receive any reactions to my posts (and yes, I still cross-post on Xitter sometimes). Third, I follow a lot of devs, but people on Mastodon seem to be less filtered, so they post a variety of things that have nothing to do with software development. This creates a lot of noise that is hard to filter out. The end result is that I don't have much desire to post or interact there either.
Joining BlueSky took a while, but that place feels like an empty tomb. I can almost hear the wind blowing through the deserted hallways when I open the app. At least the devs I follow are pretty focused.
I felt burnt-out on side projects. Having a whole bunch of WIP projects can get to you. At least I managed to fully release a 1.0 product in 2023, but for the most part I'm continuously iterating on multiple projects with no end in sight.
Writing blog articles can be a full-time job. I get article ideas quite frequently, but there's a long way from idea to a finished blog post. Depending on complexity (are there code snippets? - make sure the code is formatted properly and it works! pictures? - process, crop, resize, add annotations, make thumbnails, etc!) it can take many hours to publish a complete piece. Hours that I would rather use on a side project.
Losing analytics. In 2023 Google Analytics switched to a new version, which broke my existing setup. Good riddance. On the other hand, I don't have any intelligence on whether people visit this blog or which articles are popular. It makes me feel like I'm talking into the void. In 2024 I hope to add a more basic form of analytics with better privacy.
No comments. One of the best ways to track engagement and feel motivated is to have a commenting system. This blog hasn't had one, with the exception of Mastodon webmentions which apparently have privacy issues in the form of a lack of consent (if someone replies to your Mastodon thread, the reply appears automatically on your blog without their consent). In 2024 I hope to finally implement GitHub comments with utterances but I've been saying that for a long time so who knows if I'll ever get to it.
Blog engine became outdated. This blog is built on Jigsaw, a static site generator based on Laravel. I neglected to update it for a very long time and it became increasingly harder to implement new features or even deploy it properly due to failing builds and what-not.
I remembered the original mission of this blog - to keep track of interesting dev topics, to keep a record of various techniques, and to share solutions to vexing problems I solved.
After the new year I got over some of the angst and started looking at this problem with new eyes. I decided that, instead of purging all the work that went into it, I could turn it into a fun side project by upgrading the blog engine and refreshing the design.
As you're reading this, I've already accomplished the first part. The blog is now running on Jigsaw 1.3.45. Yet this is not the latest version of Jigsaw (1.7.1). Why? Because it's deployed on Netlify which supports a maximum build version of Ubuntu 20.04, which in turn is limited to PHP 8.1. Jigsaw 1.3.45 is the highest version that runs on PHP 8.1, with higher versions requiring 8.2 (but those don't run on Ubuntu 20.04).
So at this point I am basically held back by Netlify's lack of support for an Ubuntu image with PHP 8.2. Quite disappointing for such a highly regarded service, but thankfully it's not the end of the world. Even this intermediary version of Jigsaw is a lot better than what I was running before, chiefly for two reasons –
First, it uses Tailwind 3.x. This will make styling more flexible and convenient. Second, the front-end build process is streamlined and easier to use.
At this point nothing has changed in the site design, except that the updated Tailwind colors are now either more saturated or slightly darker than before. I'm fine with this. The other change is that the library behind the search was updated, and it feels like it returns more relevant results.
I will make no secret of the fact that I am getting inspiration from a lot of different developer blogs. I've been bookmarking awesome blog designs for years and now's a good time to borrow some of those great ideas.
There's a lot more that I would like to achieve, but I'll take it a step at a time. Since I don't have a clear vision for the end-result, I'll go by feel and experiment until I land on something that I like.
Stay tuned!
I continued this in 2020, 2021, 2022, and now, 2023.
🔼 💻 Laravel 🔼 🏃 Running 🔼 🏊♂️ Swimming 🔼 🕹️ Gaming 🔼 📚 Reading
🔽 🛠️ Side projects 🔽 🚴♂️ Cycling
As usual, I won't share any personal stuff here.
Thankfully I've been healthy, barring various minor sport-related injuries.
In 2023 I changed jobs. I am still full-stack and full-time remote, but in a small SaaS startup this time. It's a great place to be at, and I am happy to be working professionally with Laravel once again.
The blog has now been around for 5 years. As time goes by, the motivation to write diminishes. Or rather, I would like to write more, and sometimes I create article outlines in my head, but when it comes down to actually doing it, I always find better (read "easier") things to do.
In terms of traffic, I decided to check Google Analytics for the first time in years. It turns out that tracking is broken because I haven't implemented GA4 😆 - so broken it shall remain.
Since I'm no longer tracking I don't know which of my articles people liked in 2023, but here are a few that I think are pretty decent:
In 2023 I worked less on side projects, especially as the year progressed and I became more involved at work.
Here are some of the personal projects I worked on this year:
I described my 2023 stack at the beginning of the year and nothing has changed, except that I decided to make my own blog engine.
The only new thing that intrigues me this year is HTMX. I wish it existed back in the days of jQuery. Today, however, it's a bit redundant with a Laravel back-end, since Livewire is the perfect equivalent.
The only new, "big-budget" gadgets I bought this year were both sports-related. A bike computer - Garmin Edge 840, and a watch - Garmin Forerunner 965. They are both great devices, but the watch is absolutely amazing. It's a bit of a downgrade from a Fenix in terms of durability, but everything else is leagues ahead, and it has helped me take my fitness to new heights.
Speaking of fitness, 2023 saw my activities shifting towards running, and slightly away from cycling. I still ride quite a lot, but running has definitely been the headliner this year.
In 2023 I rode only 5100 miles (8200 km) vs 6400 miles last year.
On the other hand, running has become my top fitness activity (at least in terms of enjoyment). It's very ironic since I always dunked on running as being one of the most boring activities in addition to wrecking your knees and joints.
Well, I stuck with it for short distances at first, moving on to longer and longer runs. After a while I really began to enjoy it. I learned a few things that helped me progress: do 80% of your runs at low intensity, run short distances often, and listen to your body for recovery.
A couple of things contributed to my newfound love for running: I can listen to a lot of podcasts and music while running (so I don't get bored), and it's super convenient to throw on some gear and go for a quick 30-45 minute run every day, at any time. In that respect it's way more convenient than cycling, which takes me quite a while of prep before I'm ready to go.
As my running fitness increased, so did my distances and paces. My favorite pace at the moment is 9:30 min/mile (5:54 min/km). If you're a seasoned runner this probably doesn't sound like much, but it's a pace that I can run comfortably for miles and miles.
At the same time, I started to enjoy longer distances. My daily 4 mile run became 5 miles and most days I wanted to run more. I ran quite a few half-marathon distances. I thought that sometime in 2024 I would be ready for a full marathon. Well, it happened a lot sooner than I had anticipated. One day in early December I decided to go for it and I completed the full distance in 4h 30m. Pretty good for a debutant, especially considering it was a trail run and quite muddy over a long stretch. This was a solo run - not a race.
Also in 2023 I learned how to swim properly in terms of technique and endurance, all from watching YouTube videos and applying the theory. So now I'm a much stronger swimmer who can go longer distances without becoming winded.
Health was good otherwise, except for minor sports injuries that recovered quickly.
I read more books this year, by applying some discipline in the form of reading one chapter every night before bed. I really enjoyed The Broken Earth trilogy by N.K. Jemisin, Project Hail Mary by Andy Weir, and an old classic (this time I hope to read the entire series), Ringworld by Larry Niven.
I don't watch TV in the traditional sense, but I do stream a fair amount of movies and TV shows.
Some of the movies I liked in 2023 are: Glass Onion: A Knives Out Mystery (2022), The Menu (2022), The Whale (2022), Tetris (2023), Dungeons & Dragons: Honor Among Thieves (2023), Creed III (2023), The Super Mario Bros. Movie (2023), John Wick: Chapter 4 (2023), Polite Society (2023), BlackBerry (2023), Nimona (2023), Asteroid City (2023), Bigbug (2022), Spider-Man: Across the Spider-Verse (2023), Bottoms (2023), Totally Killer (2023), The Equalizer 3 (2023), Jules (2023), Oppenheimer (2023), Killers of the Flower Moon (2023), The Holdovers (2023), Finestkind (2023).
I also watched individual seasons from various TV shows. Excellent ones include: Wednesday S1 (2022), The Mandalorian S3 (2023), The Sandman S1 (2022), One Piece S1 (2023), Foundation S1-S2 (2021-2023), The Fall of the House of Usher S1 (2023).
2023 was a gaming resurgence. Some of these games aren't new, I just happened to buy and play them this year. I have a huge backlog of games, so I can wait very patiently to buy them during sales.
On PC/Mac I really enjoyed: Last Epoch, Halls of Torment, Satisfactory, Death Must Die, Hardspace: Shipbreaker, ΔV: Rings of Saturn.
On Switch I had fun with: Cult of the Lamb, Loop Hero, Hades.
In 2023, Twitter was dead to me. I stopped posting anything of value. I'm also on Mastodon, but to be honest the death of Twitter has completely demotivated me from participating in any kind of social media. I'm also on BlueSky but that place seems dead.
I finished the year with 594 followers on Twitter and 106 followers on Mastodon (double what I had at the end of 2022).
Development stuff
Last year I said I was excited about new versions of various tech stacks. I've reached a tipping point and I just wish things would slow down for a change. I'm tired of the never-ending release cycle. My wish is for everyone to take a year off from new releases (who am I kidding tho?).
Technology
This year ChatGPT has been omnipresent, and we've been building heavily on it at work. Going forward, I would also like this space to slow down a little because the implications are disquieting. Again - who am I kidding - change will happen with or without my approval.
Projects
I don't have any specific personal projects in mind. At the very least I'd like to finish or continue polishing the ones that are still WIP after many years.
Games
I have a feeling I'll be gaming more in 2024. I'm burnt out on work and side projects and I need some long-term R&R. I already have a few queued up on the PC: Witchfire, Outer Worlds, Noita, We Who Are About To Die, Astral Ascent, and on the Switch I might buy The Legend of Zelda: Tears of the Kingdom.
Health and fitness
Being healthy is good and I'd like to continue that. Likewise with being in shape. In 2024 I hope to expand my running volume (cumulative distance) and keep it on an even keel with cycling and swimming.
That about wraps it up for now. Once again, thank you for your readership, dear friends, and let's make 2024 the best year of the '20s yet!
All the names and locations are redacted for privacy.
I've been visiting a place in the USA that is popular with tourists. A local friend mentioned that they had found a lost GoPro a few months earlier, but they had no idea how to use it, so they asked me to take a look at it.
The camera in question is a GoPro 11 Black. For perspective, I too have a GoPro, but it's the 1st generation. The newer ones are really cool with their touchscreen and other modern tech.
The first thing I did was power on the GoPro and have a look at the menus. I was completely unfamiliar with the UI, and this programmer doesn't RTFM first, yet I felt right at home.
I wanted to find the serial number to see if it had been reported lost/stolen. So I went to Preferences > About > Camera Info, and under Camera Name I read Smith HERO11. I thought, "huh, Smith sure sounds like a last name". Apparently the owner labeled it when they set it up. That's a clue!
I plugged the MicroSD card into my computer and browsed the photos and videos. Standard stuff shot by an American man and his family on vacation in the same place I had been.
Now I had a face - this must be Mr Smith. But a name and a face don't make it much easier to locate the needle in the haystack.
After the high-level skimming of the photos (I ignored the videos because who has time for that), I decided to take a closer look at the details, in particular clothing accessories and other personal items that might provide some identifying data.
Sure enough, in one photo Mr Smith was wearing a hat with a company logo "ACME Corp". This was not a company most people had heard of, so I thought there might be a good chance Mr Smith worked there. Clue number 2!
Armed with a company name and a last name, I searched LinkedIn for "ACME Corp". Bingo!
I went to the (thankfully short) list of employees and the first person I see is a Mr Bob Smith. He bore a resemblance to the man in the photos, yet looked a lot younger. Reading his bio, I realized he was too young to be the same person.
Slightly dejected, I went back to the employee list and continued scrolling. Not much farther down, I see Mr Bill Smith who looked like a 90% match. I opened up his profile and right away I knew he was the camera owner. He even had a clearer photo from an event he had attended, and there was no doubt it was the same person.
Owner located!
So what about the first Mr Smith - Bob? I believe he might be Bill's younger brother based on resemblance, and it so happens that they both work at the same company.
This, in theory, is the easy part. I messaged Mr Smith on LinkedIn that I found his camera.
Unfortunately, there's a snag. Mr Smith did not have any LinkedIn activity in the past year. So it's unlikely he will respond (soon). He also has a profile on Twitter, but it's locked down with little activity.
I will try other avenues for contacting Mr Smith. If those fail, then I guess finders keepers. My friend will get to hang on to, and use, the GoPro. I will make a backup of Mr Smith's media and keep it in case I find a way to reunite him with it.
If you read this by any chance, Mr Smith & associates, please check your LinkedIn messages, or message me on Twitter or Mastodon. There's also a Contact form that I barely check. The same applies to anyone who knows someone who's lost a GoPro 11 Black in recent months.
P.S. I'll make sure to ask some relevant questions to validate the actual owner - partly why I didn't give any personal details.
I'll update this if I manage to contact the owner.
I'm happy to report that I was finally able to reunite the owner with the GoPro.
To follow up on step 5, after waiting a while for Mr Smith to respond on LinkedIn, I noticed that the younger Mr Smith (brother?) had recent activity, so I messaged him with a summary of why I was trying to contact the older Smith.
Sure enough, Mr Smith (the owner) signed in shortly after, confirmed ownership, and gave me a home address.
I mailed the GoPro and he received it without issues. I'm very happy that I was able to help.
I'll leave you with a parting tip. During this investigation, various people online were recommending putting a contact.txt file with your info on the memory card of a camera or other device that might get lost. Very common-sense advice that hadn't occurred to me until now.
Along with that, Laravel Folio and Laravel Volt were also released. I won't rehash what they are, just follow the links if you don't know already.
Suffice to say, a lot of people started experimenting with these three right away. I was one of them. You can find other NativePHP experiments at Awesome NativePHP.
NB: I made this a while back, but I'm just now getting around to writing about it.
During that time, the air quality in my area of the United States got really bad from the wildfires in Canada. What better way to experiment with NativePHP (and Folio, and Volt) than to build a simple desktop taskbar app that would show me the AQI (Air Quality Index) for my zipcode, at a glance?
So was born AQI Desktop. Find the source code on GitHub.
I wanted to limit the scope of this mini project in order to release v1 quickly. I am notorious for dragging out a release until it reaches some unspecified level of polish.
I used the AirNow API to get the AQI data. It requires registering at the AirNow portal for an API key.
The main requirements for v1 were:
The History tab.
I added some visual tweaks to make it look nice and called it a day.
It's not perfect - I could continue hacking on this, but I moved on. It's good enough as a v1, for what it is.
I think that building desktop apps with PHP is absolutely sick! The cool thing is that it's framework-agnostic, so you can install NativePHP in any PHP project, and it will transform it into a desktop app.
Currently, though, it is not even half-baked. The developers have taken a break from it, and I don't blame them. It's not easy to build something like this. The project hasn't been updated since Laracon, and there are a lot of issues and pull requests that have been sitting there for months.
I wouldn't recommend it for anything serious, but it's a fun experiment, and it can definitely be used to make all kinds of small utilities for personal use, even in its current state.
I am really happy NativePHP exists and I do hope development will continue, because I think it has a lot of potential, and a lot of PHP developers would prefer using it over Electron or Tauri.
Now, I'll dive deeper into how I used spatie/laravel-data for the ItemData DTO, plus some custom data casts for the CSV columns.
A DTO is a simple class that holds data. It's a great way to encapsulate data and pass it around the application.
Imagine receiving a CSV with inconsistent column data. Some columns could be empty, or contain data in a different format than my database likes. The spatie/laravel-data package helps with transforming and validating the data into a consistent format that I can then dump into the database.
For this application I needed a single DTO class, which I created in app/DataObjects/ItemData.php. These are the contents of the class:
namespace App\DataObjects;
use App\DataCasts\CommaSeparatedStringToArrayCast;
use App\DataCasts\DollarStringToIntCast;
use App\DataCasts\DateStringToCarbonImmutableCast;
use App\DataCasts\StringToNullCast;
use Carbon\CarbonImmutable;
use Spatie\LaravelData\Attributes\WithCast;
use Spatie\LaravelData\Data;
class ItemData extends Data
{
public function __construct(
#[WithCast(DateStringToCarbonImmutableCast::class, 'm/d/Y')]
public CarbonImmutable|null $received_on,
#[WithCast(DateStringToCarbonImmutableCast::class, 'm/d/Y')]
public CarbonImmutable|null $ordered_on,
#[WithCast(StringToNullCast::class)]
public string|null $brand,
public string $model,
#[WithCast(DollarStringToIntCast::class)]
public int $price,
public string $with_tax,
#[WithCast(StringToNullCast::class)]
public string|null $store,
#[WithCast(CommaSeparatedStringToArrayCast::class)]
public array $tags,
public string $notes,
) {
}
}
In Part 1 I was mapping each row of the CSV to a DTO like this (where $rowProperties is an array of column values):
$itemData = ItemData::from($rowProperties);
The cool thing about this is that certain columns that I didn't want to include (like days, years, months, age) are automatically ignored, because they are not defined in the DTO.
The spatie/laravel-data package uses PHP attribute notation to apply custom casts to data properties. I'm not casting every column, just the ones that need to be transformed into a specific format.
Here's what each of my casts looks like. All casts are under the App\DataCasts namespace, and all import the Spatie\LaravelData\Casts\Cast interface, as well as the Spatie\LaravelData\Support\DataProperty class.
DateStringToCarbonImmutableCast
Transforms a "m/d/Y" string to a CarbonImmutable object, or null if empty.
use Carbon\CarbonImmutable;
class DateStringToCarbonImmutableCast implements Cast
{
private string $timezone = 'UTC'; // 'America/Chicago'
public function __construct(
protected ?string $format = null,
) {
}
public function cast(DataProperty $property, mixed $value, array $context): ?CarbonImmutable
{
if (! $value) return null;
return CarbonImmutable::createFromFormat($this->format, $value, $this->timezone)->startOfDay();
}
}
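If you're not using Carbon, the same conversion can be sketched with PHP's built-in DateTimeImmutable. This is a hypothetical standalone equivalent (including the helper name), not the app's code:

```php
// Parse an "m/d/Y" string into a DateTimeImmutable at start of day (UTC),
// or return null for empty input.
function parseDateOrNull(string $value, string $format = 'm/d/Y'): ?DateTimeImmutable
{
    if ($value === '') {
        return null;
    }

    // createFromFormat() returns false on failure; treat that as null too.
    $date = DateTimeImmutable::createFromFormat($format, $value, new DateTimeZone('UTC'));

    return $date === false ? null : $date->setTime(0, 0);
}

echo parseDateOrNull('12/6/2020')?->format('Y-m-d'); // 2020-12-06
```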
StringToNullCast
Casts empty strings to null (because I prefer a NULL in my DB column rather than an empty string).
class StringToNullCast implements Cast
{
public function cast(DataProperty $property, mixed $value, array $context): string|null
{
return $value === '' ? null : $value;
}
}
DollarStringToIntCast
Transforms a dollar string to cents as integer.
Example: "$1,600.72" becomes 160072.
class DollarStringToIntCast implements Cast
{
    public function cast(DataProperty $property, mixed $value, array $context): int
    {
        // Strip "$" and thousands separators, scale to cents, then round()
        // before casting: floating-point math can land just below the
        // expected integer, and a bare (int) cast truncates.
        return (int) round((float) preg_replace('/[$,]/', '', $value) * 100);
    }
}
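As a standalone sketch (hypothetical helper name), the conversion is just a regex strip plus scaling, with round() guarding against floating-point error before the integer cast:

```php
// "$1,600.72" -> "1600.72" -> 160072 cents.
function dollarsToCents(string $dollars): int
{
    return (int) round((float) preg_replace('/[$,]/', '', $dollars) * 100);
}

echo dollarsToCents('$1,600.72'); // 160072
```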
CommaSeparatedStringToArrayCast
Transforms a string of comma separated tags into an array of tags.
Example: "bike, tool" becomes ['bike', 'tool'].
class CommaSeparatedStringToArrayCast implements Cast
{
public function cast(DataProperty $property, mixed $value, array $context): array
{
return str($value)
->explode(',')
->map(fn($tag) => str($tag)->trim()->toString())
->filter()
->toArray();
}
}
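For readers outside Laravel, the same transformation can be sketched in plain PHP without the str() fluent helper (a hypothetical standalone equivalent, not the app's code):

```php
// Split on commas, trim each piece, drop empty entries, and reindex.
function commaSeparatedToArray(string $value): array
{
    return array_values(array_filter(array_map('trim', explode(',', $value))));
}

echo json_encode(commaSeparatedToArray('bike, tool, ')); // ["bike","tool"]
```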
Once each CSV row has been transformed into a DTO, I can save it to the database. The saveItem function won't have to worry about any of the data transformations; it only needs to save the object to the corresponding tables.
$item = $this->saveItem($itemData);
Now the data is in the database, and I can use it to build the dashboard and other features. This, however, is outside the scope of this series.
I've only scratched the surface of what's possible with spatie/laravel-data. DTOs may not make sense right away, but once they do, you might find yourself reaching for the pattern more often than not.
Going back to the concept of building blocks, with Laravel you don't have to build everything from scratch. Not only does it come with its own robust (and ever-growing) set of building blocks, but the surrounding ecosystem is so vast and mature that you can find a package for almost anything. Quite often it's just a matter of gluing the right packages together with a sprinkle of custom code, and voilà, you have a functional MVP!
Here's a video from Laracon 2023 of Mr Spatie himself (Freek Van der Herten) showing off some advanced techniques using the spatie/laravel-data package.
For context, I've been tracking bigger personal expenses in a Google Sheet for many years. By "bigger expenses" I mean material things that tend to last for a while. For example a bike, a laptop, a phone, tools, a nice pair of shoes, etc.
I wanted more insight into my spending habits: tracking expenses across different categories (or tags as I think of them), brands, and time spans (yearly, monthly), plotting expense charts, and calculating the lifetime cost of a particular item.
These are all things that an Excel expert might accomplish easily, but my hammer of choice is Laravel, so that's what I used.
The good news is that it doesn't take a lot of work to glue these blocks together to get a command-line CSV import working. The hard part is building the UI for all the complex visualizations I want to do. This series, however, will focus strictly on the import process.
In Part 1 I want to describe the command I built with Laravel Prompts to import the CSV file.
The CSV file I'm importing looks like this:
received_on,ordered_on,brand,model,days,years,months,days,age,price,with_tax,store,tags,notes
12/6/2020,,Keychron,K1 v4 87-key RGB Red switches wireless mechanical keyboard,989,2,8,18,2 years 8 months 18 days,$95.61,including tax,Amazon,,
1/11/2021,,Microsoft,Xbox Wireless Controller + USB-C cable,953,2,7,12,2 years 7 months 12 days,$53.11,including tax,Xbox store,computer,
3/8/2021,,Leatherman,Squirt PS4 Multi-Tool,897,2,5,16,2 years 5 months 16 days,$45.75,including tax,REI,"bike, tool",This is a note
Note that days, years, months, and age are all calculated by a formula in the Google Sheet. I don't need them in my database, so I'm going to ignore them.
"But why don't you just connect directly to the Google Sheet?" you might ask. Well, mostly because I don't want to go through the hassle of setting up OAuth and all that. I just want to export the Google Sheet as a CSV file and import it. I don't need this data to be live, since I don't update the original Sheet very often.
I decided to dump the CSV in storage/app, where my import command can find it.
Run the command with php artisan stuff:import.
Laravel Prompts was announced at this year's Laracon and I just had to use it. It gives the command line superpowers through enhanced interactivity.
Here's what the command looks like (imports omitted for brevity):
class ImportCommand extends Command
{
protected $signature = 'stuff:import';
protected $description = 'Import from CSV or XLS';
private ?int $userId = null; // User ID to import to
private ?string $fileName = null; // File name (CSV/XLS) to import from
public function handle(): int
{
$this->getUserId();
$this->getFileName();
$this->warn('Importing from "' . Storage::path($this->fileName) . '"');
$rows = SimpleExcelReader::create(Storage::path($this->fileName))->getRows();
$bar = $this->output->createProgressBar();
$bar->start();
$rows->each(function (array $rowProperties) use ($bar) {
try {
$itemData = ItemData::from($rowProperties);
$item = $this->saveItem($itemData);
$bar->advance();
} catch (CannotCreateData $e) {
$this->error($e->getMessage());
return;
}
});
$bar->finish();
DashboardDataService::bustCache($this->userId);
return Command::SUCCESS;
}
private function getUserId(): void
{
//
}
private function getFileName(): void
{
//
}
private function saveItem(ItemData $itemData): Builder|Model
{
//
}
}
First, I use $this->getUserId() to prompt for a user's email. I am the only user of this app, but I decided to make it support multiple users with authentication (via Breeze) from the start, just in case.
As you type, Prompts searches the database for a matching email and autocompletes it. It then returns the user's ID. It keeps prompting until a valid email is entered, or you hit "Ctrl+C" to exit.
private function getUserId(): void
{
do {
$userId = search(
label: 'What is the user\'s email?',
options: fn(string $value) => strlen($value) > 0
? User::where('email', 'like', "%{$value}%")->orderBy('email')->pluck('email', 'id')->all()
: [],
scroll: 10
);
$this->userId = (int)$userId;
} while (User::query()->where('id', $this->userId)->doesntExist());
}
Once I have a user ID, I use $this->getFileName() to display a list of CSV/XLS files in the storage/app folder. In this case there's only one.
private function getFileName(): void
{
do {
$this->fileName = select(
label: 'Select file to import (<fg=white>CSV/XLS in storage/app</>)',
options: array_values(preg_grep('/\.(csv|xls)$/', Storage::files())),
scroll: 10
);
if (!$this->fileName) {
$this->info('bye');
exit;
}
} while (!Storage::exists($this->fileName));
}
Next, I use SimpleExcelReader to read the CSV file and return a collection of rows.
I then start a progress bar and iterate over the rows, creating an ItemData DTO from each row and then saving it to the SQLite database.
The saveItem() method is a bit messy, but it does exactly what I need it to: it saves the data to the related tables (brands, stores, tags, items), all inside a transaction.
I won't go into the details of the models and their relationships because it's outside the scope of this series, but it should also be fairly self-explanatory.
private function saveItem(ItemData $itemData): Builder|Model
{
DB::beginTransaction();
$brand = null;
if ($itemData->brand) {
$brand = Brand::query()->updateOrCreate(
[
'user_id' => $this->userId,
'name' => $itemData->brand,
],
[
'name' => $itemData->brand,
]
);
}
$store = null;
if ($itemData->store) {
$store = Store::query()->updateOrCreate(
[
'user_id' => $this->userId,
'name' => $itemData->store,
],
[
'name' => $itemData->store,
]
);
}
$item = Item::query()->updateOrCreate(
[
'user_id' => $this->userId,
'received_on' => $itemData->received_on,
'ordered_on' => $itemData->ordered_on,
'brand_id' => $brand?->getAttribute('id') ?? null,
'model' => $itemData->model,
],
[
'price' => $itemData->price,
'with_tax' => $itemData->with_tax,
'store_id' => $store?->getAttribute('id') ?? null,
'notes' => $itemData->notes,
]
);
foreach ($itemData->tags as $itemTag) {
$tag = Tag::query()->updateOrCreate(
[
'user_id' => $this->userId,
'name' => $itemTag,
],
[
'name' => $itemTag,
]
);
try {
$item->tags()->attach($tag);
} catch (\Exception $e) {
// ignore exceptions if re-importing the same items
}
}
DB::commit();
return $item;
}
You might also notice the statement DashboardDataService::bustCache($this->userId);. I won't go into the inner workings; just know that all the data displayed on the user's dashboard is cached for performance (keyed by the user ID), and the cache needs to be cleared every time new data is imported.
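The post doesn't show DashboardDataService, so the sketch below is an assumption about its shape: dashboard data lives under a per-user cache key, and busting the cache simply forgets that key so the next dashboard request rebuilds it. The in-memory cache class and the key format are illustrative stand-ins for Laravel's Cache facade, not the real implementation.

```php
// Illustrative stand-in for a cache store; a real Laravel service would
// call the Cache facade instead.
class InMemoryCache
{
    private array $store = [];

    public function put(string $key, mixed $value): void
    {
        $this->store[$key] = $value;
    }

    public function get(string $key): mixed
    {
        return $this->store[$key] ?? null;
    }

    public function forget(string $key): void
    {
        unset($this->store[$key]);
    }
}

// Hypothetical key format; drop the user's dashboard entry so it is
// rebuilt on the next request.
function bustCache(InMemoryCache $cache, int $userId): void
{
    $cache->forget("dashboard-data.{$userId}");
}
```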
If you're a seasoned (Laravel) developer you might be offended at the idea of building all the import logic in the import command itself. I agree that it's not ideal, but I don't care. There are myriad ways to optimize this, but it's simply not worth it for a quick prototype where the bulk of the functionality lies in the UI.
That's it for Part 1. In Part 2 I'll dive deeper into how I used spatie/laravel-data to create the ItemData DTO, as well as the custom data casts that I used for some of the CSV columns.
If you're looking for work in a specific stack, I recommend narrowing your search to focus on sites that specialize in that stack. Likewise, there are remote-only job sites which should be high on your list, if remote is important for you.
Unfortunately, the "classic" job sites such as Monster, LinkedIn, etc. have fallen behind in terms of user experience and actually helping you find what you need. There is one exception, which is where I actually found this position, but I'll get to that in a minute.
Here's a short list of job sites that I found most helpful in my search.
LaraJobs is the official Laravel job search site and my go-to recommendation if you're set on working in this stack.
The cool thing about LaraJobs is that you're guaranteed to find Laravel jobs instead of noise and chaff, like on other sites.
Unfortunately the site seems a bit unfinished and the user experience is lacking. For example, you can't apply multiple filters at once: I can't search, say, for "TALL stack" + "Full Time" at the same time. Also, there doesn't seem to be a way to filter by location; even for remote work, I would like to be able to filter out jobs on another continent.
Despite the downsides, LaraJobs is razor-focused on Laravel jobs, and is updated regularly.
LaraDir is a Laravel-focused, discovery-based job directory. It's different from the others on this list in that you create a developer profile and then companies can find you, so it's more company-oriented. It sounds like an interesting approach, but I just came across it, so I haven't used it.
RemoteOK is a job site that focuses strictly on remote work. It's a great place to find remote tech jobs in general, but it's not limited to Laravel. So if the tech stack is not that important for you, RemoteOK is a great place to start.
RemoteOK has a lot more filters than LaraJobs. You can filter by multiple criteria, including salary and location. You can also search by keyword and sort the results by various parameters.
The site is made and operated by Pieter Levels, the NomadList guy.
Indeed is a classic job site that has been around for a long time. Surprisingly, it has been updated since the last time I was looking for a job, and it feels fresh, modern and easy to use. You can create alerts with certain keywords, and you will receive emails with matching jobs.
The filtering is very good, and helps you narrow down the results.
This is where I found my current job, and that makes me a happy user.
TL;DR LinkedIn is only useful for researching companies; don't use it as your primary job search tool.
LinkedIn used to be at the top of my list (next to Indeed) for job searching in the past. However, it has become a mess. The user experience is terrible, and the site is full of spammy content and nagging/clueless recruiters.
The email alerts are pretty awful, despite requesting to receive only the most relevant jobs. I would receive emails with jobs that were not even close to what I was looking for.
I would avoid LinkedIn for job searching, unless you're looking for a specific company.
LinkedIn, however, does have one redeeming feature. It's (still) a good place to research a company.
You can make sure the company is legit, check out how many employees it has, and drill down to the individual positions of the people who work there. As a software engineer, I like to see the developers and managers who work there, and try to form a mental picture of the company culture.
Companies also have various links, posts, and information on their LinkedIn page, which can be helpful.
In general, I avoid recruiters. Most of them like to spam me on LinkedIn, but unfortunately they are only wasting my time with irrelevant positions.
The only recruiters that I will talk to are: 1) those whom I've worked with successfully in the past, and 2) those who have clearly done their research, and are presenting me with a relevant position.
I would recommend turning off the "I'm available for work" flag if you want to avoid dealing with recruiters.
Here comes a very personal, intimate, and wholly biased opinion.
Having either worked for, interviewed with, or had some form of close contact with these types of companies, I place them at the bottom of my wishlist (unless they have something very special to offer).
Obviously, each of us has their own preferences, and it's up to you to decide who you want to work for.
My perception is that the Laravel market is not just doing well, but has been growing at a rapid pace over the past few years. Ironically, the massive tech layoffs of 2022/2023 seem to have had zero impact on the Laravel job market.
If I were to guess why this is the case, I would say that a lot of companies have started to realize how much you can get done in this stack, so Laravel has become a boon for any sort of startup that wants to iterate fast and be quick to the market. Larger companies are not immune to this either, as there are countless internal tools that can be built quickly with Laravel, even when the company hesitates to use it for their main product.
There are plenty of Laravel jobs out there for all experience levels, although this guide is biased towards senior and mid-level developers. I've had the luxury of deliberately filtering out positions that didn't agree with me in one way or another.
Another side effect of working in the Laravel ecosystem is that the community leans heavily towards remote work, thanks to several high-profile companies that have been pushing the concept for years; let's also not forget the flexibility of the stack and the plentiful documentation that facilitate asynchronous work.
I hope these pointers will help you find your next Laravel/remote job in 2023 and beyond!
I wanted a reason to use SvelteKit 1.x with Skeleton UI, and this was the perfect opportunity.
So why make something about React? If you haven't picked it up from my ramblings, I've been successfully avoiding React since forever because Svelte is amazing. But life is unpredictable, and this year I got a new job where React is part of their stack. So it makes sense for me to learn React, and what better way to do it than to contrast with how the equivalent thing is done in Svelte?
The live demo is hosted on Vercel. The source code is on GitHub.
Without further ado, here are some new things I used on this project.
While I've used SvelteKit briefly pre-1.0, this is the first project where I'm using the final 1.x release.
Recently I saw this tweet from Daniel Imfeld about importing raw code from another file or component in SvelteKit, and I thought I could use this technique for the React vs Svelte code examples. I'll talk about it further down.
I heard about Skeleton UI briefly on Twitter, but my interest was really piqued when I listened to the Podrocket episode Using Svelte and Tailwind to build reactive interfaces with Chris Simmons. After browsing the docs I was sold.
Skeleton is a UI library built with Svelte and Tailwind CSS. Sort of like Tailwind UI for Svelte, if you will. It's currently in beta, but is about to be released as a stable 1.0.
Skeleton is a very ambitious project that aims to provide (almost) complete coverage for all manner of styled Svelte components you might need.
In the short time I spent using it, I found it to be very well-thought-out and easy to use. It's great for building UI prototypes fast.
I did run into a couple of issues, but at least one of them has been resolved, though the fix isn't tagged yet. It's cool though, as v1.0 is just around the corner.
These are the Skeleton components I used to put together this mini-project:
I really dig SvelteKit's routing. Essentially, the routes directory is a collection of Svelte components that are automatically routed to based on the file structure.
Inside each directory there's a +page.svelte file which holds the actual page content, typically paired with a +layout.svelte file that holds the page layout.
Child directories inherit the parent +layout.svelte file. But you can also override it by placing a +layout.svelte file in the child directory.
What do you do if you want to use the same layout across a bunch of child directories? In my case, I wanted to use the same layout for all the hooks (the side-by-side React vs Svelte code blocks). SvelteKit makes this easy by allowing you to place all the child directories in a directory whose name is surrounded by parentheses, in this example (hooks).
The result is that in the browser the URL will look like this: /useEffect. If the parent had been named hooks, the URL would have been /hooks/useEffect, but that's not what I wanted. In other words, it's a way to "namespace" the child directories and to apply the same template to all of them while keeping the URL clean.
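To make the grouping concrete, the resulting directory layout looks roughly like this (the useState sibling is hypothetical):

```
src/routes/
├── +layout.svelte          <- root layout
└── (hooks)/
    ├── +layout.svelte      <- shared layout for every hook page
    ├── useEffect/
    │   ├── +page.svelte
    │   ├── +page.ts
    │   ├── react.jsx
    │   └── svelte.svelte
    └── useState/           <- hypothetical sibling hook
```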
I placed each hook example in its own folder (useEffect, etc.) with the following contents:
+page.svelte - empty file, just because SvelteKit requires it
+page.ts - the "API" that provides the raw code for the React and Svelte examples and passes it to the layout as props
react.jsx - the raw React code example
svelte.svelte - the raw Svelte code example
Back to +page.ts, these are the contents:
import type { PageLoad } from "./$types"
export const load = (async ({ params }) => {
return {
title: "useEffect",
react: (await import("./react.jsx?raw")).default,
svelte: (await import("./svelte.svelte?raw")).default,
}
}) satisfies PageLoad
This uses Daniel Imfeld's raw import technique mentioned above. The cool thing about it is I can keep each example in its own native file extension, so .jsx for React and .svelte for Svelte. This makes it easier to read and edit the code examples, but also works well in the IDE.
The (hooks)/+layout.svelte template can use these properties to display the code examples:
<script>
import { page } from "$app/stores"
import { CodeBlock } from "@skeletonlabs/skeleton"
import Header from "./Header.svelte"
</script>
<!-- Page Route Content -->
<slot></slot>
<Header>
<svelte:fragment slot="header">
{$page.data.title}
</svelte:fragment>
</Header>
...
<CodeBlock code={$page.data.react} language="jsx" />
...
<CodeBlock code={$page.data.svelte} language="svelte" />
...
To tell the truth, I'm not convinced this is the best approach. I had some trouble getting slots to behave the way I wanted inside each hook's +page.svelte, so I resorted to using +page.ts instead, with the associated duplication. I'm sure there's a better way to do this, but I'm still learning SvelteKit.
Overall, I'm pretty satisfied with how this works. It's trivial to copy-paste each hook directory and change the title prop and the code examples.
It took me about 2 days of casual tinkering to put this together, and I'm pretty happy with the result. Once Skeleton UI is released as a stable 1.0 version, I'll go back and fix a few things. I was glad to kick Skeleton's tires, and I'm sure it will become a staple in my toolbox.
SvelteKit is fabulous, though I'm not very proficient in it yet. I can only hope that I will have more opportunities to use it in the future.
GeoJSON is an open source spatial data format for encoding a variety of geographic data structures. I became aware of it just last week when I started building my newest side project, Seismic.
I liked how simple it looked, and noticed that it refreshed periodically. Poking at it in the browser dev tools revealed that it was polling a public GeoJSON feed.
This inspired me to make a free/open-source desktop taskbar app that would replicate the functionality of the website, since I was visiting it quite often at the time. I thought it would also be cool if the app could notify me when a new earthquake above a certain magnitude threshold happened.
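For a flavor of what such a feed looks like: it's a GeoJSON FeatureCollection whose features carry properties like a magnitude and a human-readable place. The snippet below uses a truncated, hypothetical sample in that shape to show what a magnitude-threshold check might look like; this is not Seismic's actual code.

```php
// A tiny, hypothetical sample in the GeoJSON FeatureCollection shape.
$json = '{"type": "FeatureCollection", "features": [
    {"type": "Feature", "properties": {"mag": 4.6, "place": "Example Ridge"}},
    {"type": "Feature", "properties": {"mag": 1.2, "place": "Example Valley"}}
]}';

$feed = json_decode($json, true);

// Keep only events at or above the notification threshold.
$threshold = 4.0;
$alerts = array_filter(
    $feed['features'],
    fn (array $feature) => $feature['properties']['mag'] >= $threshold
);

echo count($alerts); // 1
```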
I called the app Seismic and released v1.0 in less than a week, which is a new record for me. I'm pretty happy with the result, and now it's living in my taskbar 24/7.
Download the latest release for your platform here, or check out the source code.
From my 2023 toolbox:
Note that the app is not signed, so you might get a warning when you run it for the first time. This is because I'm not paying for a code signing certificate.
Note, also, that I've only tested it on Mac. It's very possible that it may not work as nicely on Windows or Linux. Unfortunately I don't have machines for those platforms to test it on (particularly the taskbar integration and desktop notifications).
Seismic is feature-complete for now, but I've already prepared v1.1 with optional color-coded events based on magnitude. Before that, I'm working on setting up an automatic updater, so you should be able to receive the next version without having to download it manually.
The USGS feed has a lot of metadata that I could display, so perhaps I'll add a details view for each event (if I can figure out what it all means). At the very least, I want to mark events that come with a tsunami warning.
A built-in map view would be interesting, though I'm not sure if it's worth the effort since you can already open the event location in geojson.io.
I'm very happy with how quickly I was able to reach v1.0 on this project. The reason for it is that it was a simple concept to start with, and I made sure to limit the scope to the bare minimum. I also had a lot of the tooling already in place, so it was just a matter of putting it all together.
Another positive side effect is that I now have a pretty solid taskbar app template for future projects. In fact, I have an older, unfinished app that will benefit from the same treatment, as it makes a perfect taskbar candidate. Stay tuned!
Granted, the audience is small for now, but I'm hoping to grow it over time. I'm also hoping that this will encourage me to write more often. Eventually I hope to add GitHub issues as comments as well.
Figuring out how to do this from scratch on my own would have been a massive undertaking. Thankfully, I found this article by Jesse Skinner that explains in great detail how to do this.
If your blog doesn't use VueJS, you can probably stop here, although you can apply the same principles to components written in other JS frameworks. Otherwise, read on.
Jesse's solution worked great until I reached the JS part. Since this blog is not built with a JavaScript framework, I couldn't just copy/paste his front-end code and expect it to work.
I'm using Laravel Jigsaw to build this blog. The code is open source, so feel free to check it out.
I considered several solutions, but in the end I decided to go with VueJS. There was already a VueJS component previously, for the search functionality. It made sense to follow the same pattern and add a new component for the Mastodon replies.
To start, I created a new Blade partial in source/_partials/mastodon-webmention.blade.php that I included at the bottom of source/_layouts/post.blade.php (the template for a single blog post). This Blade partial is just a wrapper around the VueJS component, and provides an id for the component to hook on to.
<section id="mastodon-webmention">
<mastodon-webmention page-url="{{ $page->getUrl() }}" mastodon-toot-url="{{ $page->mastodon_toot_url }}"></mastodon-webmention>
</section>
It takes two props:
- page-url: the URL of the current page (the blog post)
- mastodon-toot-url: the URL of the Mastodon toot
Note that $page->getUrl() is a Jigsaw helper function that returns the URL of the current page.
The Mastodon toot URL is empty initially, until I announce the published article in a Mastodon toot. Then I grab the URL and update the front matter of the blog post. I need this in order to provide a "Discuss this article on Mastodon" link at the bottom of the blog post.
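For reference, a post's front matter with the toot URL filled in might look something like this. The key name matches what the component reads via $page->mastodon_toot_url; the title and toot URL below are made up for illustration:

```yaml
---
title: Some blog post
mastodon_toot_url: https://indieweb.social/@brbcoding/109123456789012345
---
```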
Before being able to use the VueJS component, I needed to configure it. I created a new file, source/_assets/js/components/MastodonWebmention.vue, and then registered the component in source/_assets/js/main.js:
import MastodonWebmention from './components/MastodonWebmention.vue';
if (document.getElementById('mastodon-webmention')) {
new Vue({
components: {
MastodonWebmention
},
}).$mount('#mastodon-webmention');
}
I am mounting the component to the #mastodon-webmention element, which is the wrapper I created in the Blade partial. I'm also checking that the element exists before mounting the component, to avoid JS errors on pages that are not blog posts (they won't have this element).
Now that the component is registered, it's time to copy the code from Jesse's article and paste it into the MastodonWebmention.vue file under the methods section. Note that this is Vue 2.5 code, so it doesn't use the Composition API.
I massaged it into a VueJS-friendly format, and added some additional helper methods.
I'm rendering replies, boosts, and favorites in the same component, much in the same way Jesse's doing it, but in a slightly different order. I'm also rendering a "Discuss this article on Mastodon" link at the bottom of the component, if the Mastodon toot URL is set.
Here's the full code for the component, but you can also check it out on GitHub.
<template>
<div :class="mastodonTootUrl.length || replies.length || boosts.length || favorites.length ? 'my-4 flex flex-col gap-4' : ''">
<a
v-if="mastodonTootUrl.length"
:href="mastodonTootUrl"
class="w-full p-2 text-center text-xl text-mastodon-purple hover:text-white bg-indigo-100 hover:bg-mastodon-purple rounded font-bold"
target="_blank"
>
Discuss this article on Mastodon
</a>
<div v-if="replies.length">
<h6 class="mb-2 text-xl text-mastodon-purple font-bold">Replies</h6>
<div class="flex flex-col gap-2">
<div v-for="reply in replies" :key="reply.url" class="p-2 border-2 border-mastodon-purple rounded">
<a :href="reply.author.url" class="flex gap-2 items-center text-base text-mastodon-purple font-bold group" target="_blank">
<img :src="reply.author.photo" :alt="reply.author.name" class="w-16 rounded-lg">
<div class="flex flex-col">
<span class="font-normal group-hover:text-mastodon-purple group-hover:underline">{{ reply.author.name }}</span>
<span class="text-sm text-gray-600 font-light">{{ authorUrlToMastodonUrl(reply.author.url) }}</span>
</div>
</a>
<div class="mt-2 text-gray-900 text-sm font-light">
<p class="text-black">{{ reply.content.text }}</p>
<a :href="reply.url" target="_blank" class="block -mt-4 text-right text-mastodon-purple hover:text-mastodon-purple hover:underline">Reply</a>
</div>
</div>
</div>
</div>
<div v-if="boosts.length">
<h6 class="mb-2 text-xl text-mastodon-purple font-bold">Boosted</h6>
<div class="flex flex-wrap gap-2">
<a v-for="boost in boosts" :key="boost.url" :href="boost.author.url" target="_blank">
<img :src="boost.author.photo" :alt="boost.author.name" class="w-16 rounded-lg">
</a>
</div>
</div>
<div v-if="favorites.length">
<h6 class="mb-2 text-xl text-mastodon-purple font-bold">Favorited</h6>
<div class="flex flex-wrap gap-2">
<a v-for="favorite in favorites" :key="favorite.url" :href="favorite.author.url" target="_blank">
<img :src="favorite.author.photo" :alt="favorite.author.name" class="w-16 rounded-lg">
</a>
</div>
</div>
</div>
</template>
<script>
export default {
props: {
pageUrl: {
type: String,
required: true,
default: '/blog',
},
mastodonTootUrl: {
type: String,
required: true,
default: '',
},
},
data() {
return {
// https://webmention.io/api/mentions.jf2?target=https://yourblog.com/blog/blog-post-slug/&per-page=100&page=0
webmentionIoUrl: 'https://webmention.io/api/mentions.jf2',
link: '',
favorites: [],
boosts: [],
replies: [],
};
},
computed: {
},
methods: {
async loadWebmentions() {
let mentions = await this.getMentions(this.pageUrl);
if (mentions.length) {
this.link = mentions
// find mentions that contain my Mastodon URL
.filter((m) => m.url.startsWith('https://indieweb.social/@brbcoding'))
// take the part before the hash
.map(({ url }) => url.split('#')[0])
// take the first one
.shift();
// use the wm-property to make lists of favourites, boosts & replies
this.favorites = mentions.filter((m) => m['wm-property'] === 'like-of');
this.boosts = mentions.filter((m) => m['wm-property'] === 'repost-of');
this.replies = mentions.filter((m) => m['wm-property'] === 'in-reply-to');
}
},
async getMentions(pageUrl) {
let mentions = [];
let page = 0;
const perPage = 100;
while (true) {
const results = await fetch(
`${this.webmentionIoUrl}?target=${pageUrl}/&per-page=${perPage}&page=${page}`
).then((r) => r.json());
mentions = mentions.concat(results.children);
if (results.children.length < perPage) {
break;
}
page++;
}
return mentions.sort((a, b) => ((a.published || a['wm-received']) < (b.published || b['wm-received']) ? -1 : 1));
},
// Transforms "https://mastodon.social/@authorname" to "@authorname@mastodon.social"
authorUrlToMastodonUrl(url) {
const parts = url.split('/');
return `${parts[3]}@${parts[2]}`;
},
},
created() {
this.loadWebmentions();
},
};
</script>
And that's about it! It's worth mentioning that I haven't touched Vue in a few years, but it felt as familiar as riding a bike.
Here's what it looks like:
This is not a full review, merely a short setup guide for my use-case, and a few random impressions.
I'm only listing the features I care about; for the full specs hit the official LG page above.
Resolution/Size: 27" 4K (3840x2160) IPS @ 60Hz, 5ms response time
Stand: Tilt/Height/Pivot (there's also a more expensive Ergo option)
Other: Built-in 5W 2-channel speakers, AMD FreeSync, 60W power delivery over USB-C
Included: Power adapter, USB-C data/power cable, HDMI cable
Price: ~$406 on Amazon with tax and free shipping ($450 retail)
The monitor came out of the box with super-saturated colors that frankly looked horrible side by side with my MacBook Pro. Here's what I adjusted to bring the image down to more civilized levels. It doesn't match the MacBook's image 100%, but it comes close enough for the small amount of work I put into it.
System Settings > Displays > (select the LG monitor) >
If you can pick the LG 27UN850-W for a discount, at ~$400 it provides excellent value. It may not be ideal for a graphic designer, but as a coder I'm very happy with it so far.
It so happens that I was curious if the domain GenericCompany.com was available. Lo and behold, it was! So I told them it would be a good idea to buy the domain right away, even if they didn't plan to use it.
My reasoning was twofold. First, they may think they don't need an internet presence now, but you never know how things will change in a few years. Second, these days very few companies are lucky enough to find the exact CompanyName.com domain available for purchase.
When I explained it like that, my "client" was immediately on board with the idea. So, less than $100 and 15 minutes of my time later they had a brand-new 10-year domain with a generic (and super basic) landing page. Now they have a web presence, and no one else can claim GenericCompany.com. Win-win!
Here's what I did.
Domain registrar: Cloudflare
Cloudflare has recently begun to sell domains. I love them because they don't charge any extra fees, so their domains are even cheaper than Google's. They've also been around for a long time, protecting the web from bad actors, which has built a lot of trust in the online community. To top it off, the domain management UI is very easy to use.
Hosting: Vercel
Vercel makes it super easy to deploy static websites. The free tier is more than enough for this application.
The requirement for this "website" was to have just a landing page with the company name front-and-center, and a tagline below. Dynamic content, design, mobile responsiveness, and SEO were not needed.
Initially I thought about doing it in SvelteKit with TailwindCSS for basic styling, but then I laughed out loud when I realized what a massive overkill that would be.
So I went back to the most basic thing you can imagine: an index.html file with rudimentary HTML and one line of JS to make the date in the footer dynamic.
Here's the code in all its majestic glory:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<title>Generic Company</title>
<style>
main {
height: 95vh;
display: flex;
flex-direction: column;
justify-content: center;
text-align: center;
padding: 0 2em;
}
h1 {
font-size: 3em;
}
footer {
text-align: center;
}
</style>
</head>
<body>
<main>
<h1>Generic Company</h1>
<h2>Company tagline bla bla.</h2>
<em>coming soon</em>
</main>
<footer>© <span id="year">2023</span> Generic Company</footer>
<script>
document.getElementById('year').innerText = new Date().getFullYear()
</script>
</body>
</html>
Feel free to use it :)
First I bought the domain on Cloudflare: $91.50 for 10 years (.com domains are $9.15/year). Next, I made a private GitHub repository where I pushed the index.html. In Vercel I created a new project for this website, and linked the GitHub repo to it. I had to give explicit permissions to allow Vercel to access the repo. That was all the configuration I needed, since Vercel knows what to do when it encounters an index.html in the root.
I then added a Domain to my project. I assigned genericcompany.com to it, and Vercel provided me with an A record and a CNAME record to configure for the domain, which I did in Cloudflare.
SSL is handled automatically, though keep reading for a little gotcha.
When you add a custom domain, Vercel suggests serving the main site on www.genericcompany.com and redirecting genericcompany.com requests to it. They claim their edge network can optimize things better this way. I accepted the suggestion even though I prefer the opposite; it's your call, either works. The default suggested redirect is 308 Permanent Redirect, which I also kept.
The gotcha: if you don't do anything else at this point, you might be baffled by an err_too_many_redirects error when you load the webpage from your new domain.
Vercel provides an answer to this problem. Essentially, you need to go to Cloudflare and set the SSL/TLS encryption mode to "Full" or "Full (strict)". With the default "Flexible" mode, Cloudflare connects to the origin over plain HTTP, the origin redirects that request back to HTTPS, and the cycle repeats. I set mine to Full at first, but Cloudflare offers a helpful analysis which recommends Full (strict) for even better security, which is what I ended up using.
I'm glad I had the foresight to check for my "client's" company domain availability. For negligible cost, and a trivial amount of work on my part (I didn't charge them, obviously), they now have a web presence for the next 10 years. More importantly, no one else can claim that domain in the meantime, and the client can always decide to build an actual web presence at their own convenience, with the peace of mind that their company trademark has a secure online presence.
TL;DR:
- TALL stack (Laravel + Livewire + AlpineJS + TailwindCSS), Svelte + SvelteKit, Tauri
- database: MySQL, SQLite
- situational: PostgreSQL, Inertia, Rust
- blog: Writefreely
Read on for details.
Laravel is my solution for anything requiring a database. Together with Livewire, AlpineJS, and TailwindCSS I can quickly build complex functionality and appealing UI.
With 8.2, PHP has continued to evolve and stay relevant, driving a majority of the world's websites. PHP 8.3 is probably going to launch in 2023.
Laravel remains the best web framework (in my biased view) and continues to improve steadily. Version 10 is coming out in early 2023, but with new features added to the framework every week, major versions don't feel like the huge stepping stones they once were, which speaks to Laravel's maturity.
Livewire is the logical companion to Laravel for building interactive UI without much JavaScript. It goes hand-in-hand with AlpineJS for those times when you need fancier UI behavior. Livewire v3 is expected this year, but I'll admit I'm a little apprehensive because it heralds a lot of changes and deprecations, though it compensates with better performance and new APIs.
MySQL 8.x is my go-to server-side database, mostly because I've been using it forever and it satisfies most of my requirements.
PostgreSQL brings powerful features that MySQL does not yet have, so I am waiting for a chance to use it for the right application.
SQLite is becoming more fashionable lately, after developers realized that a simple flat-file database can handle most simple applications (and even some complex ones), and it's very portable which makes it easy to maintain and back up.
Another thing that makes SQLite desirable is that it's a very good option for desktop apps where you need to persist data in a more permanent way.
I have already started to use SQLite on a very early-stage prototype for a side-project I'm working on, and I have plans to use it in various desktop apps.
Tailwind has been my bread-and-butter for styling the front-end since v0.7. As a full-stack dev, nothing makes me more efficient at building good-looking UI. Tailwind continues to gain new features, and I can't imagine another CSS framework overthrowing it.
For JS apps, Svelte is my jam. It's such a joy to work with that I genuinely miss it after spending too much time in PHP-land.
SvelteKit is my tool for building static sites that require any kind of routing. Add one more to the list of frameworks that have finally reached v1.0 after a long beta.
Philosophically, I've abandoned the concept of a backend-driven SPA (Single Page App). With Laravel and Livewire there's just no need for it. However, if I ever needed something along those lines, I would choose Laravel with Inertia.js and Svelte, though I am open to using Vue if my employer or client required it.
Inertia has been on my radar for a very long time, and I'm itching for a reason to use it. There has never been a better time to add Inertia to the Laravel stack than now, with v1.0 officially released after a very long beta period.
I don't think much about AlpineJS. It's there when I need it, usually in tandem with Livewire. Works very well for adding small bits of dynamic functionality to an otherwise static page or site.
Sometimes I build desktop apps, usually when I feel that the app doesn't need internet access or a user account. One of the most complex apps I built with Electron is SVGX.
In 2022 I discovered Tauri, and my world was flipped upside down. Right out of the box, Tauri provides a much better developer experience than Electron. It allows me to scaffold a desktop app a lot quicker (with Svelte and Tailwind, because that's what I like), and comes with batteries included.
With Tauri, I made my most complex desktop app to date, a UI client for Stable Diffusion on the Mac. The open-source Svelte-powered codebase has collected 237 stars on GitHub so far (it may not sound like much, but it's a lot to me). In addition, I made a few other small apps that remained in the prototype stage.
I never had a chance to get into systems programming, but Tauri nudged me to use Rust to an extent (Tauri uses Rust under the hood). I'll admit that it feels alien compared to the PHP/JS stack I've been using all my career, but it's interesting at the same time because it can build some seriously fast and efficient low-level programs.
I'll continue to dabble in Rust as required by Tauri, probably at an early amateur level.
In 2022 I've been thinking more seriously about starting another blog on fitness (cycling, swimming, running, nutrition, etc). The main difficulty was picking the right blog engine (ain't it always?). I think I've found the solution in Writefreely.
Writefreely is a minimalistic blog engine that can be self-hosted on a Linux box. Both of these are things that I'm looking for, so I'll give it a spin and see how it goes.
That just about wraps it up. Since I'm always learning something new, there's a good chance this list will change by the end of the year, but that's part of the fun!
This list is relatively short because I follow the principle "avoid packages until it hurts". I use packages that encapsulate complex functionality and features that would take too long to implement from scratch. I also heavily favor those that are well maintained, which is why you'll see a lot of Spatie ones on the list.
php artisan clear-compiled
php artisan ide-helper:generate
php artisan ide-helper:models
php artisan ide-helper:meta
I continued this in 2020, 2021, and now, 2022.
As usual, I won't share any personal stuff except that, despite all precautions over the past 2 years, Covid-19 finally found me. I have no idea how it happened, especially since at the time I was pretty cautious about isolating, but thankfully it was a mild case and mostly over after 2 days.
Apart from that, I'm grateful for being generally healthy (see the Fitness section below) and aging gracefully.
I continue to work remotely as a full stack PHP + JS developer for (mostly) legacy code. As a useful distraction I worked in Node on a newer project for a couple of months. The company is A+ but the work leaves me unfulfilled and uninspired.
The blog has now been around for 4 years. That's cool, but I've been posting less every year. I'll continue to add interesting coding tidbits, however my motivation has been sinking for a long time.
I've been mulling over the idea of redesigning the blog, including a brand new engine. I would very much like to use SvelteKit for that. I have a list of requirements in mind - pretty much all the features of this blog, with comments in addition. A SvelteKit blog template that comes very close is SwyxKit.
Who knows, maybe 2023 will finally be the year for it, especially now that SvelteKit 1.0 is finally released. I realize how much of a lift it is though, so I won't make any promises.
Traffic-wise, I don't care enough to check.
I'll skip the popular posts section this time. Feel free to browse my articles, though some of the older ones are becoming outdated or irrelevant.
In 2022 I used less Laravel in my personal projects. Laravel remains my go-to for complex, server-driven applications, but I haven't settled on a good project yet.
I'm becoming more partial to the idea of building standalone/offline desktop apps, which is where the Tauri/Svelte stack comes in.
Here are some of the personal projects I worked on in 2022:
I described my 2022 stack at the beginning of the year, and most of it holds true with some exceptions.
In 2021 I adopted Apple Silicon for my personal and work laptops.
In 2022 I bought my first iPad, a 2021 12.9" Pro M1 + Apple Pencil. It's quite a big boi, but I always wanted a large screen for reading comics and sketching ideas, icons, diagrams, and other amateur art.
I still use an Android phone, but eventually I know I'll switch to an iPhone as it makes everything easier when all devices share the same ecosystem. I like to hold on to my devices for at least 2 years (3+ ideally) and I'm holding out for a rumored future iPhone with a big optical zoom lens like I currently have on my Samsung S21 Ultra.
The new Apple Watch Ultra is very interesting for an amateur athlete like me, but it doesn't hold a candle to a flagship Garmin watch, at least in terms of sports/outdoors features. For that reason I don't see myself buying an Apple Watch very soon.
My only other tech purchase was a Nintendo Switch OLED. I'm very late to the party, but I didn't have much of a reason for one until this year, when Diablo 2 Resurrected was released on the Switch. Diablo 2 is my #1 game of all time and, while I hate what Blizzard has become, I can't fault how D2R turned out. It's the D2 with modern graphics I always wanted. It works amazingly well with a controller, and the Switch is perfect for long-haul flights where I can easily while away the hours.
Cycling remained my one and only activity. In 2022 I rode a total of 6400 miles (10.4K km), slightly more than the previous year.
I've acquired a 4th steed, a lightweight gravel race bike, but in 2023 I plan to sell most of my stable and focus mostly on gravel riding.
Last year I said I hoped to race a few events but it failed to materialize. My usual excuse is that I'm too lazy to prepare for the logistics of a race, not to mention driving to the location. I'd much rather hit the trails close to my house since it's way more convenient.
For 2023 I won't make any promises - I'll compete if I feel motivated enough, otherwise I'll stay the course.
Unfortunately cycling has also come with the downside of injuries. At the beginning of the year I had a crash which wasn't too bad considering, but took me out of action for 2 weeks. Later I developed a persistent sort of RSI in one arm which prevents me from enjoying the sport to the fullest. That's the way it goes with me and sports. I tend to become obsessed with them which often leads to injury from overindulgence.
Last year I said I was planning to read some comics and I stayed true to that. I read Jodorowsky's The Incal, and re-read the Tintin and Asterix books.
I don't watch TV in the traditional sense, but I do stream a fair amount of movies and TV shows.
Some of the movies I liked in 2022 are: The French Dispatch (2021), Moonwalkers (2015), Studio 666 (2022), The Batman (2022), Sonic The Hedgehog 2 (2022), The Bad Guys (2022), The Power of the Dog (2021), The Sea Beast (2022), Prey (2022), Top Gun: Maverick (2022), Nope (2022), Bullet Train (2022), All Quiet On The Western Front (2022), Weird: The Al Yankovic Story (2022), The Banshees of Inisherin (2022).
I also watched individual seasons from various TV shows. Excellent ones include: Our Flag Means Death S1 (2021), Severance S1 (2022), Stranger Things S4 (2022), Avenue 5 S1 (2021), Rome S1-S2 (2005-2007), House of the Dragon S1 (2022), The Lord of the Rings: The Rings of Power S1 (2022), Andor S1 (2022), The White Lotus S2 (2022).
Most of my gaming in 2022 was on the Switch, playing Diablo 2. A lot of it was done on long flights and during various downtimes.
The end of 2022 marked the end of my active participation on Twitter. While I'm only there for the dev side of things, I don't agree with the change in ownership. So I moved my active presence over to Mastodon. If you want to stay in touch with my dev postings, follow me there.
I finished the year with 562 followers on Twitter and 59 followers on Mastodon.
Development stuff
Laravel 10 is coming out in the spring. Livewire 3 should also be released at some point. SvelteKit 1.0 was finally released in 2022 so I expect it will continue to evolve. I'm expecting Tauri to gain a larger share of the cross-platform development pie.
I would love to dip my toes in something new, such as Svelte development with Three.js, and even learning how to make games with Godot.
Technology
AI tech such as ChatGPT and StableDiffusion will continue to improve. The future will be interesting. Will AI take everyone's jobs? I'm not too worried but I will keep a wary eye on this space.
Projects
These are some of the side projects I could work on in 2023, though realistically there's never enough time.
Health and fitness
More than anything, I want to avoid further sports-related injuries in 2023. Cycling shall continue, of course, supplemented with other activities such as running and swimming.
And that about wraps it up for now. Once again, thank you for your readership dear friends, and may 2023 be gentler on all of us than the past few years.
These improvements are partly facilitated by the Rust back-end, but on the flip side you might have to write Rust code at some point. From my point of view, the advantage here lies with Electron, since it's built on Node/JavaScript, which I'm a lot more familiar with.
Tauri does have enough cool features (and promises of things to come) to draw me in. The documentation is pretty good too. The best thing is that I was able to quickly scaffold a new project with a Vite + Svelte preset, then pull in Tailwind CSS without a hitch. With Electron this would have taken me a lot longer.
Desktop apps very often need to do more things than the front-end code can. Things like accessing system APIs such as the filesystem, a database, or making HTTP requests.
The last one is the basis of a new idea I had for a desktop app. It required an HTTP client, and to my embarrassment I failed to build a simple one in Rust in 2 hours despite GitHub Copilot helping me (or perhaps because of it).
The question that comes to mind is, why didn't I take the time to learn Rust? Well, friend, one does not simply learn Rust in a couple of hours. Rust's paradigm is foreign enough to a PHP and JS user that it would require a longer period of deep study - time that I don't have right now. Don't get me wrong, Rust is at the top of my list of future things to learn, but currently I have other priorities.
Playing around with very basic Rust inside Tauri I quickly realized that I can call system commands from my app. The Svelte front-end can call Rust functions and pass arguments to them. The Rust function can, in turn, call a system command and pass it those arguments.
Following the thought process, I figured that I could just as well use Rust to call a PHP executable in the form of a PHAR. So I could build my HTTP client in PHP, package it into a PHAR file which I would then bundle with the Tauri app and boom, mission accomplished!
Hold your horses, this is an imperfect solution. I'll get into the weeds of how all this works but feel free to skip to the end if you want to hear the drawbacks.
Next I'll explain the basic concept for 2-way communication between Tauri and a PHP app via serialized JSON data.
As I mentioned previously, the front-end can call Rust functions with arguments. Taken directly from the Tauri docs:
invoke('my_custom_command', {invokeMessage: 'Hello!'})
The Rust function then calls a .phar command and forwards the arguments. The .phar code accepts the command, parses the argument, does whatever logic it needs, then returns a serialized/stringified JSON object back to the Rust function.
Finally, the Rust function returns the string to the front-end code that issued the command.
To retrieve the response on the front-end, chain a .then to the invoke command like so:
invoke('my_custom_command', {invokeMessage: 'Hello!'})
.then((result: string) => {
const jsonResult = JSON.parse(result)
})
And now we're back in familiar territory, so we are free to do whatever we want with the response object.
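Since the PHAR's response comes back as a plain string, it's worth guarding against malformed output before trusting it. Here's a small sketch (not from the actual app) that assumes the {error, message} JSON shape used by the example PHP app later in this article; handlePharResponse is a hypothetical helper name:

```javascript
// Sketch: normalizing the string response from a PHAR-backed command.
// Assumes the PHP side returns JSON shaped like { error: boolean, message: string }.
function handlePharResponse(result) {
  let json;
  try {
    json = JSON.parse(result);
  } catch (e) {
    // The PHAR printed something that isn't JSON (a PHP warning, for example)
    return { ok: false, message: 'PHAR returned invalid JSON' };
  }
  return json.error
    ? { ok: false, message: json.message }
    : { ok: true, message: json.message };
}

// Hypothetical usage with Tauri's invoke:
// invoke('my_custom_command', { invokeMessage: 'Hello!' })
//   .then((result) => {
//     const { ok, message } = handlePharResponse(result);
//     if (!ok) console.error(message);
//   });
```

The try/catch matters because anything the PHP process writes to stdout (including warnings) ends up in the captured string.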
I haven't worked a lot with PHP executables in the past (apart from consuming them) so I wasn't in the mood to build one from scratch. Thankfully there's an excellent and powerful package named Box that can automate the build process.
On the Mac I used the Homebrew installation, so I can run it from anywhere on the command line with box.
To compile a PHP project, simply navigate to the project on the command line and run box compile. It will generate a PHAR binary named after the entry point script. So if your app's entry point is index.php, the binary will be index.phar.
The beauty of Box is that it can compile anything with zero config (though you can certainly tweak the configuration in great detail), from a simple 1-file PHP script to a full-blown Laravel app.
My advice, though, is to stick to the basics if you don't need the full power of a framework since it will have an impact on the file size of the PHAR.
There are 3 methods that I recommend:
Plain PHP with Composer. Use composer init and follow the prompts to quickly scaffold a new project structure. Pull in as few dependencies as you can get away with (ideally none) and rejoice in the tiny bundle size.
Symfony console component. If you're a Symfony dev this is an excellent choice, especially since Symfony components are a solid backbone for a lot of other frameworks including Laravel. Unfortunately I have zero experience here so there's not much I can say.
Laravel Zero - a powerful Laravel-based CLI framework by Nuno Maduro, Laravel core team member. This one's very powerful, and has excellent documentation. It would be my go-to if I wanted to build something more complex than option #1. Furthermore, Laravel Zero includes Box by default so you don't need to install it separately.
To keep it simple, I'll create a basic PHP project which takes a string argument when invoked, and responds with a JSON encoded string.
mkdir php-example && cd php-example
composer init
This scaffolds a fresh project with a composer.json that looks like this (I changed the default generated namespace):
{
"name": "breadthe/php-example",
"description": "Example app that accepts a string argument and returns JSON encoded data",
"type": "project",
"license": "MIT",
"autoload": {
"psr-4": {
"App\\": "src/"
}
},
"require": {}
}
Run composer install. Finally, create an index.php file in the root of the project as the entry-point script, with the following:
<?php
require __DIR__ . '/vendor/autoload.php';
// Get the first argument
$argument = $argv[1] ?? null;
if (empty($argument)) {
echo json_encode([
'error' => true,
'message' => 'Argument expected',
]);
return;
}
echo json_encode([
'error' => false,
'message' => "PHP says hi and thanks for the message [$argument]",
]);
return;
That's it on the PHP side. Now, this is the simplest example I could think of. If you want to pass more than one
argument you could use $argc
to count the total arguments, then loop through them. Complex logic, additional classes,
services, etc. would then go into src/
.
Assuming you have installed Box globally on your system, all you need to do is
run box compile
inside the project folder.
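If you do want to tweak things, Box reads a box.json file from the project root. A minimal sketch might look like this (the values here mirror this example project; Box works fine without any config at all):

```json
{
    "main": "index.php",
    "output": "index.phar",
    "directories": ["src"]
}
```

The "main" key is the entry-point script, "output" names the resulting PHAR, and "directories" lists extra source folders to bundle.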
The result is an index.phar
file that you can execute with ./index.phar some_argument
. It will
output {"error":false,"message":"PHP says hi and thanks for the message [some_argument]"}
, or {"error":true,"message":"Argument expected"}
if you don't specify
an argument. Later, Rust will run this file and capture the output.
First install the prerequisites which include Rust and a bunch of dependencies.
Also install the Tauri CLI:
# either
npm install --save-dev @tauri-apps/cli
# or
cargo install tauri-cli
I prefer Vite + Svelte:
npm create vite@latest
#✔ Project name: … tauri-vite-php
#✔ Select a framework: › svelte
#✔ Select a variant: › svelte-ts
npm install
If using Svelte, update the vite.config.ts
file like so (the Tauri docs omit the Svelte plugin):
import {defineConfig} from 'vite'
import {svelte} from '@sveltejs/vite-plugin-svelte'
// https://vitejs.dev/config/
export default defineConfig({
plugins: [svelte()],
// prevent vite from obscuring rust errors
clearScreen: false,
// Tauri expects a fixed port, fail if that port is not available
server: {
strictPort: true,
},
// to make use of `TAURI_PLATFORM`, `TAURI_ARCH`, `TAURI_FAMILY`,
// `TAURI_PLATFORM_VERSION`, `TAURI_PLATFORM_TYPE` and `TAURI_DEBUG`
// env variables
envPrefix: ['VITE_', 'TAURI_'],
build: {
// Tauri supports es2021
target: ['es2021', 'chrome100', 'safari13'],
// don't minify for debug builds
minify: !process.env.TAURI_DEBUG ? 'esbuild' : false,
// produce sourcemaps for debug builds
sourcemap: !!process.env.TAURI_DEBUG,
},
})
Inside the Vite project folder scaffold the Tauri/Rust part of the project with the following options:
# either
npm run tauri init
# or
cargo tauri init
#✔ What is your app name? · tauri-vite-php
#✔ What should the window title be? · tauri-vite-php
#✔ Where are your web assets (HTML/CSS/JS) located, relative to the "<current dir>/src-tauri/tauri.conf.json" file that will be created? · ../dist
#✔ What is the url of your dev server? · http://localhost:5173
In src-tauri/tauri.conf.json
update the build
block to:
{
"build": {
// this command will execute when you run `tauri build`
"beforeBuildCommand": "npm run build",
// this command will execute when you run `tauri dev`
"beforeDevCommand": "npm run dev",
"devPath": "http://localhost:5173",
"distDir": "../dist"
},
Also update the bundle identifier from the default com.tauri.dev
to a unique reverse-domain string:
{
...,
"tauri": {
...
"bundle": {
...
"identifier": "com.tauri-vite-php.dev",
To run Rust commands from JavaScript an additional dependency is required:
npm install @tauri-apps/api
Run the app in dev mode:
# either
npm run tauri dev
# or
cargo tauri dev
To build for production use npm run tauri build
or cargo tauri build
.
If everything went well this is how the new app looks:
In App.svelte
(or whatever file is the main entry-point to your front-end) I replaced the generated HTML with:
<button on:click={sayHiToRust}>Say hi to Rust</button>
In the JS section:
import {invoke} from "@tauri-apps/api/tauri";
let rustResponse: string = "";
function sayHiToRust() {
invoke("say_hi", {name: "Rust"}).then(
(response) => (rustResponse = response)
);
}
Next we're adding the Rust function that will handle the front-end request. In src-tauri/src/main.rs
:
tauri::Builder::default()
.invoke_handler(tauri::generate_handler![say_hi])
...
#[tauri::command]
fn say_hi(name: String) {
println!("Hello {} from JS!🥳", name);
}
Looking at the terminal where the dev process is running, we should see Hello Rust from JS!🥳
.
Now that we can pass data to Rust, let's get data back from it.
We'll extend the Rust function a bit to return a string:
#[tauri::command]
fn say_hi(name: String) -> String {
println!("Hello {} from JS!🥳", name);
let output = "Hi back from Rust".to_string();
output
}
Back in App.svelte
add to the HTML:
{#if rustResponse}
<div>Rust response:</div>
<div>{rustResponse}</div>
{/if}
Clicking the button will now display Rust response: Hi back from Rust
.
So now we have 2-way communication between the front-end and Rust.
I went ahead and improved this a bit by adding a text field where you can type a message that will be sent to Rust. If you type "bla bla", the Rust console will now say JS says: bla bla
. I won't show the changes here but you can inspect them in the repository for the complete code.
Now let's wire up PHP to Rust, and start by copying the index.phar
created earlier to src-tauri
.
Back in src-tauri/src/main.rs
(accounting for the changes mentioned at the end of the previous section):
#[tauri::command]
fn say_hi(message: String) -> String {
println!("JS says: {}", message);
// execute the index.phar binary
let output = std::process::Command::new("./index.phar")
.arg(message)
.output()
.expect("failed to execute index.phar");
// convert the output to a string
let output = String::from_utf8(output.stdout).expect("failed to convert PHP output to string");
output
}
Basically we're using Rust to execute a command, passing it the message from the front-end as an argument. We then assign the command's output (in this case a JSON string) to the output
variable and return it to the front-end, which now displays Rust response: {"error":false,"message":"PHP says hi and thanks for the message [hey]"}
.
Back in JS we can use JSON.parse
to transform the PHP response back to a JSON object.
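As a sketch, here's what that parsing step looks like. The response string is hard-coded below to mirror the PHP output; in the actual app it would come from the invoke() promise:

```javascript
// Parse the JSON string that PHP produced and Rust relayed to the front-end
const raw = '{"error":false,"message":"PHP says hi and thanks for the message [hey]"}';
const data = JSON.parse(raw);

if (data.error) {
  // the PHP script sets error:true when no argument was passed
  console.error(data.message);
} else {
  console.log(data.message);
}
```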
Here's what the final demo looks like after I added a few more bits and pieces (ignore the lack of styling):
You can find the Tauri repo here and the PHP repo here. Note the Tauri repo already contains index.phar
but feel free to rebuild it if you want.
While there was a very positive reaction when I tweeted about this technique, it's not all roses and butterflies. Here are some of the drawbacks.
- Running ./index.phar still seems to require a PHP runtime on the machine. I haven't found much online to confirm or deny this, and I don't have a dev machine without PHP to test this on either.
- Bundling index.phar as a dependency of the packaged app. I hope to figure that out soon, since it's critical for what I'm planning to build.

I'll admit that I'm quite enjoying this experiment. It may not lead anywhere, but it's still a valuable example of thinking outside the box. The issues I encountered might even push me sooner to learn some Rust.
I can see this technique being especially useful for building desktop tools for PHP developers. In this scenario the requirement for the PHP runtime might not be a deal-breaker.
I hope you enjoyed this guide and let me know on Twitter what you think.
What follows is a list of notes I made while reviving BankAlt. It is by no means an exhaustive guide, rather a list of steps I took to declare myself satisfied with the outcome.
BankAlt was created circa 2009 when it was still acceptable to roll your own framework. As a result, it is based on a PHP mini-framework someone at work had written. I simplified it even further since I didn't need all the features of the original.
The code is all procedural, which is fine for such a small project. It does not implement MVC.
Why revive this project at all? Why not let it fade into memory? Nostalgia. BankAlt was a labor of love and very dear to me at the time. Besides, I was curious how much of a lift it would be to update it to a modern 2022 stack.
Main goals:
Bonus goals:
Super bonus goal (that will likely never be attempted): rebuild it on Laravel with Livewire.
Note All my coding is done on a Mac, so if you're on a different platform this won't apply.
Since I'm already using Valet for my local environment, I wanted to use it for this as well. You can run pretty much any PHP project locally with it.
For the local database server I used DBngin running on localhost.
Valet uses Nginx as the webserver.
Originally, I built BankAlt on a Windows machine. The code was versioned using SVN, specifically TortoiseSVN.
Before switching to Git, I wanted to get rid of all traces of SVN. To do this, I deleted all the .svn
folders from the codebase. Unlike Git, which creates a single .git
folder in the root of the project, SVN creates a .svn
folder inside every nested folder that is under version control.
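In practice this boils down to a single find invocation from the project root (list first, delete second; -prune stops find from descending into the folders it is about to remove):

```shell
# dry run: list all nested .svn folders
find . -type d -name .svn -prune -print

# then delete them
find . -type d -name .svn -prune -exec rm -rf {} +
```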
Next, add the entire project to Git by running git init
, followed by git add . && git commit -m "init"
Add a .gitignore
file in the project root with the following contents:
.env
/vendor
/storage
- .env contains actual credentials, so it should never be under version control
- vendor is Composer's default package location which, in 99.99% of situations, should not be version controlled
- storage in this case contains only one subfolder, logs, which should not be version controlled for obvious reasons

Push to GitHub.
Phew, now I can safely start slicing and dicing the codebase.
One of the first things I did, just because it was grinding my gears, was to remove all the PHP closing tags from the code. You see, back in the day it wasn't universally agreed upon whether to use closing tags or not. So I deleted all the ?>
tags and made sure every .php
script ended in a blank line.
Another big legacy faux pas was to hardcode the database credentials in the code AND VERSION CONTROL IT. So that had to be refactored pronto to a modern .env
file.
First I tried to build my own .env
parser. Very soon I realized that it's harder than it sounds, so I decided to use an off-the-shelf popular package called vlucas/dotenv.
But wait, this requires Composer. Fast forward... After adding Composer:
// call this after the autoloader
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();
Now access .env
variables from anywhere in the code with $_ENV['DB_HOST']
etc.
I also added a .env.example
file to the project root containing:
APP_NAME=BankAlt
APP_ENV=local
APP_DEBUG=true
APP_URL=https://bankalt-2022.test
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=
DB_USERNAME=root
DB_PASSWORD=
Notice how it looks identical to a Laravel project. This is by intent.
Run composer init
in the project root.
Allow it to add PSR-4 autoloading from src/
.
Originally the project had an entrypoint via index.php
in the root, with logic inside module
and include
folders. I moved those two folders to src
.
Add this line at the top of index.php
:
require __DIR__ . '/vendor/autoload.php';
Here's what composer.json
looks like:
{
"name": "xxx/xxx",
"description": "2022 edition of the original BankAlt.com code",
"type": "project",
"autoload": {
"psr-4": {
"App\\": "src/"
}
},
"authors": [
{
"name": "xxx",
"email": "xxx@xxx.xxx"
}
],
"require": {
"vlucas/phpdotenv": "^5.4"
}
}
The original project structure was a bit archaic:
.svn
css
data
images
include
module
index.php
application.inc.php
I changed it to more closely resemble a Laravel project:
- Create a public folder and move the old css, images, and include/js folders to it
- Rename images to img
- Move application.inc.php (aka the bootloader) to src
- Move the include folder to src

The new structure looks like this:
.git
public
src
storage
vendor
.gitignore
.env
.env.example
composer.json
composer.lock
index.php
Relocating the CSS + JS assets broke all the static links, so I had to search/replace the paths globally.
Replace define constants with const
Read this excellent explanation on why you might prefer const
over define
, especially in a modern codebase.
define('_DIR_INCLUDE', __DIR__ . '/include/');
// =>
const _DIR_INCLUDE = __DIR__ . '/include/';
Thank goodness for keeping solid backups of ALL the data, including stored procedures! The zip archive contained the entire codebase, graphic assets, full database backups with the SQL tables, stored procedures, and functions, as well as raw design assets (PSD, etc) that should have lived elsewhere, but ultimately I was glad I had them all in one place.
The app is heavily reliant on stored procedures, 43 in fact. Those were the days when I preferred to put a lot of the business logic in stored procedures.
I created a new MySQL 8.1 database and restored from the backup easily. There were no issues restoring SQL exported from MySQL 5.1. I also removed unused stored procedures (present in the DB but not used in the code). Trimmed them down from 43 -> 16.
Later I realized that a stored procedure was also using a stored function. Running it gave a cryptic error.
This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)
A quick web-search later yielded these 2 solutions (I chose #1, as #2 appears to be less safe):
/* before */
DELIMITER $$
CREATE FUNCTION `bla_bla`() RETURNS varchar(255) CHARSET utf8
-- function body
/* after - solution 1 */
DELIMITER $$
CREATE FUNCTION `bla_bla`() RETURNS varchar(255) CHARSET utf8 DETERMINISTIC
-- function body
/* after - solution 2 */
DELIMITER $$
SET GLOBAL log_bin_trust_function_creators = 1;
CREATE FUNCTION `bla_bla`() RETURNS varchar(255) CHARSET utf8
-- function body
In the spirit of keeping dependencies to a minimum, I decided to throw together a quick logging class. I made it a singleton, and it does one thing only: appends a new line to the error.log
file.
namespace App;
/**
* Barebones error logging class
*
* Location: <project root>/src/Log.php
*
* Usage:
* App\Log::error('Your error message');
*
* Output (<project root>/storage/logs/error.log):
* [2022-04-02 16:20:46] Your error message
*/
class Log
{
private const PATH = __DIR__ . '/../storage/logs';
private const FILENAME = 'error.log';
private static self|null $singleton = null;
protected function __construct()
{
self::createStorageFolderIfNotExists();
}
public static function singleton(): ?Log
{
if (self::$singleton === null) {
self::$singleton = new self;
}
return self::$singleton;
}
public static function error(string $message): void
{
self::singleton()->writeError($message);
}
private static function createStorageFolderIfNotExists(): void
{
if (!file_exists(self::PATH)) {
mkdir(self::PATH, 0755, true);
}
}
private function writeError(string $message): void
{
$line = '[' . self::timestamp() . '] ' . $message . PHP_EOL;
$filename = self::PATH . DIRECTORY_SEPARATOR . self::FILENAME;
file_put_contents($filename, $line, FILE_APPEND);
}
private static function timestamp(): string
{
return date('Y-m-d H:i:s');
}
}
Feel free to use it in your own code. Here's the gist.
Now that the stored procedures are in place more pieces of the site are beginning to work. The professions pages (containing datatables) work nicely but all the images are missing.
These come from JS/Ajax, so the src paths need to be updated in all the .js
files. What used to be /images/
is now /img/
.
addslashes deprecation... the lazy way

There was liberal usage of addslashes($some_string_variable)
throughout the app (in ~100 places). Unfortunately I was getting "Deprecated: addslashes(): Passing null to parameter #1 ($string) of type string is deprecated" errors in various places. I guess in PHP 5.x it wasn't a problem to pass null
to this function, but 8.1 complains.
Sometimes it's ok to be lazy to quickly fix or get around a problem. Instead of putting null checks in all 100 places I opted to create a similarly-named global function called __addslashes
with built-in null checking. Then I did a global search-replace of all instances of addslashes
with the new __addslashes
function.
function __addslashes(string|null $string): string
{
return addslashes($string ?? '');
}
preg_replace deprecation

I came across a couple of functions that were calling preg_replace
without handling null strings. The quick fix here: typehint and initialize the parameter with an empty string, and return early if it's empty.
// before
function blaBla($q)
{
$patterns = [...];
$replacements = [...];
return preg_replace($patterns, $replacements, $q);
}
//after
function blaBla(string|null $q = '')
{
if (!$q) return ''; // put this line at the top
//...
}
Because it is hidden from the public, I had totally forgotten about the admin/CMS section. It has tooling for managing site content such as items, images, icons and a few other bits.
Part of its functionality depends on 3rd party sites (from where I scraped item icons for example), but those links are no longer valid. That's just fine, as this site will remain frozen in time and so will the admin section.
If this were an operational project, I would strongly consider the following:
- Replacing mysqli with PDO
One day when I'm very old and bored I might re-build BankAlt from scratch with Laravel/TALL stack, but for now the effort is not justified.
I went into this revival with general goals in mind but unsure what exactly to expect. I am very happy to have accomplished 80-90% of my goals in a couple of hours of surgical hacking.
Modernizing a 10-12 year old PHP codebase can be done in stages, starting with the lowest hanging fruit, and this is what I've done here. I employed Composer, limited dependencies to just one, and updated the project structure to match Laravel. The main goal was to load the project in a browser locally and be able to navigate all the pages without errors. This mission was accomplished successfully in less time than I had anticipated.
There's always more that could be improved, but I will hang my hat up here and call it a job well done.
My biggest side-project at the moment (NextBike) is about visualizing cycling data from Strava's API. Its main feature is a searchable, filterable, sortable datatable of all your cycling activities.
This table currently has 18 filters, but it didn't start like that. At first it had 2, then I kept adding more, and I will likely add even more. The table is also a Livewire component containing most of the logic. It reached 500+ lines before I decided it was time to extract certain parts to slim it down and more easily locate the associated logic.
People on Twitter have asked: why not a pipeline structure or a custom query builder? Sure, why not. Traits are one way of refactoring this, and likely not the best way, but they're what I like right now and what works for me.
WARNING NextBike is in beta, use at your own risk. The UX is not fully fleshed out and may be confusing initially. Be aware that you can delete your account with all its data if you wish.
There's more detail here than in the tweet because I want to show more types of filters. All the datatable logic is contained within the app/Http/Livewire/Rides.php
Livewire component.
use App\Models\Ride; // assuming the default Laravel model namespace
use Illuminate\Contracts\Pagination\LengthAwarePaginator;
use Illuminate\Support\Arr;
use Livewire\Component;
use Livewire\WithPagination;
class Rides extends Component
{
use WithPagination;
const PER_PAGE = 20; // rides per page, this will become a configurable property later
public $sortField = 'start_date';
public $sortDirection = 'desc';
// Filter properties
public $year;
public $bike;
public $frame;
public $stravaIds;
// Provides the data for the table
private function getRides(): LengthAwarePaginator
{
$rides = Ride::query()->with('bike')
->where(['user_id' => $this->userId])
// filter YEAR
->when(
$this->year && $this->year !== 'all',
fn($query) => $query->whereRaw(
'YEAR(start_date_local) = ?', [$this->year]
)
)
// filter BIKE
->when(
$this->bike > 0 && $this->bike !== 'all',
fn($query) => $query
->has('bike')
->where('bike_id', $this->bike)
)
// filter bike FRAME (on a relationship)
->when(
$this->frame > 0 && $this->frame !== 'all',
fn($query) => $query->whereHas('bike', fn($query) => $query->where('bikes.frame_type', $this->frame))
)
// filter STRAVA IDS (Example: 123456789,987654321)
->when(
$this->stravaIds,
fn($query) => $query->whereIn('id', Arr::map(explode(',', $this->stravaIds), fn($id) => trim($id)))
)
// + 14 other filters... this can get long
// sort the results
->when(
$this->sortField,
fn($query) => $query
->orderBy(
$this->sortField, $this->sortDirection
)
);
return $rides->paginate(self::PER_PAGE);
}
}
The reason I picked a trait and not something else is because, well, I just love traits in PHP. They're versatile: they approximate multiple inheritance, and they provide a simple way to extract code and logic.
Now of course, there's the danger of getting confused by properties and methods that don't seem to be defined in the class importing a trait, but this can be resolved by checking which traits are imported, and/or by clicking through to the property or method definition in the IDE.
Laravel itself (and much of the ecosystem) makes heavy use of traits and, by giving them intuitive names, makes their purpose straightforward to understand. See for example the Livewire\WithPagination
trait in the code example above.
To refactor, I've extracted the conditional ->when()
parts of the Eloquent query, the filtering logic, associated properties, and methods from app/Http/Livewire/Rides.php
to app/Http/Traits/Filters/WithFilters.php
.
namespace App\Http\Livewire;
use App\Http\Traits\Filters\WithFilters;
use App\Models\Ride; // assuming the default model namespace
use Illuminate\Contracts\Pagination\LengthAwarePaginator;
use Livewire\Component;
use Livewire\WithPagination;
class Rides extends Component
{
use WithPagination, WithFilters;
const PER_PAGE = 20; // rides per page, this will become a configurable property later
public $sortField = 'start_date';
public $sortDirection = 'desc';
private function getRides(): LengthAwarePaginator
{
$baseQuery = Ride::query()->with('bike')
->where(['user_id' => $this->userId]);
$rides = $this->withFilters($baseQuery)
// sort
->when(
$this->sortField,
fn($query) => $query
->orderBy(
$this->sortField, $this->sortDirection
)
);
return $rides->paginate(self::PER_PAGE);
}
}
And this is how the trait looks (showing only the method containing the Eloquent condition chain).
namespace App\Http\Traits\Filters;
use Illuminate\Database\Eloquent\Builder;
use Illuminate\Support\Arr;
trait WithFilters
{
// Filter properties
public $year;
public $bike;
public $frame;
public $stravaIds;
// Only 4/18 filters shown in this example
private function withFilters(Builder $query): Builder
{
return $query
// filter YEAR
->when(
$this->year && $this->year !== 'all',
fn($query) => $query->whereRaw(
'YEAR(start_date_local) = ?', [$this->year]
)
)
// filter BIKE
->when(
$this->bike > 0 && $this->bike !== 'all',
fn($query) => $query
->has('bike')
->where('bike_id', $this->bike)
)
// filter bike FRAME (on a relationship)
->when(
$this->frame > 0 && $this->frame !== 'all',
fn($query) => $query->whereHas('bike', fn($query) => $query->where('bikes.frame_type', $this->frame))
)
// filter STRAVA IDS (Example: 123456789,987654321)
->when(
$this->stravaIds,
fn($query) => $query->whereIn('id', Arr::map(explode(',', $this->stravaIds), fn($id) => trim($id)))
)
// + 14 other filters...
;
}
}
Note how the withFilters()
method accepts and returns an Eloquent Builder
instance, allowing it to be chained in the original/parent query. Incidentally I didn't specify the return type in the tweet screenshot.
So that's all there is to it. This pattern can be used to extract other things from a big Laravel Eloquent/DB query when it makes sense. Remember, it may not be the best method, but it could be the best for you.
Sometimes you might run into a situation where a certain JavaScript or CSS feature that your browser clearly supports does not seem to work with Electron.
A good example is aspect-ratio
which is not supported by Electron 11 because it runs a version of Chromium older than v88, and Chrome only added this feature in v88.
You can use this technique for any such feature. Here's how I enabled aspect-ratio
for an Electron 11 project.
In the Electron main process entry file (src/index.js
in my case) add this line before app.whenReady()
.
app.commandLine.appendSwitch('enable-experimental-web-platform-features');
app.whenReady().then(() => {
...
If you're using Laravel, Carbon is included in the framework by default. There are, however, two versions of it: Carbon\Carbon
and Illuminate\Support\Carbon
. Which one to use?
Based on my research, it appears that it's safe to use either. Personally I tend to use Carbon\Carbon
because it looks cleaner when I import it. The Illuminate
version is a wrapper around the official Carbon library, for backward/forward compatibility.
Laravel offers several date/time shortcuts. The most commonly used are now()
and today()
.
// Carbon\Carbon::now() vs now() -- they are functionally equivalent but...
Carbon\Carbon::now(); // returns an instance of Carbon\Carbon
now(); // returns an instance of Illuminate\Support\Carbon
Carbon\Carbon::today(); // returns an instance of Carbon\Carbon
today(); // returns an instance of Illuminate\Support\Carbon
// Difference between now() and today()
now(); // returns the full timestamp
// => Illuminate\Support\Carbon @1648647277 {#3867
// date: 2022-03-30 13:34:37.437085 UTC (+00:00),
// }
today(); // returns only the date part
// => Illuminate\Support\Carbon @1648598400 {#3870
// date: 2022-03-30 00:00:00.0 UTC (+00:00),
// }
When working with any date/time helper, be aware that by default the result is always expressed in UTC (universal) time. UTC is the format most commonly used to store timestamps in the database (with some exceptions).
As a consequence, using now()
to retrieve a timestamp for a user who is not in the UTC timezone might give unwanted results.
One technique to show each user their local timezone is to:
- store the timestamp in UTC for each user record
- store the local timezone for each user (Example: 'America/Chicago'
)
- perform timezone conversion as shown below to display local time
now(); // returns UTC time
// => Illuminate\Support\Carbon @1648647744 {#3864
// date: 2022-03-30 13:42:24.742152 UTC (+00:00),
// }
now()->setTimezone('America/Chicago'); // returns local time
now()->tz('America/Chicago'); // alias
now('America/Chicago'); // alias
// => Illuminate\Support\Carbon @1648647747 {#3865
// date: 2022-03-30 08:42:27.037484 America/Chicago (-05:00),
// }
today()->setTimezone('America/Chicago'); // returns local date
today()->tz('America/Chicago'); // alias
today('America/Chicago'); // alias
// => Illuminate\Support\Carbon @1648598400 {#3871
// date: 2022-03-29 19:00:00.0 America/Chicago (-05:00),
// }
now()->tz; // gets the current timezone
// => Carbon\CarbonTimeZone {#1112
// timezone: UTC (+00:00),
// }
now('America/Chicago')->tz;
// => Carbon\CarbonTimeZone {#1107
// timezone: America/Chicago (-05:00),
// }
// These are the same; all return "2022-03-30" UTC
date('Y-m-d'); // native PHP
now()->format('Y-m-d');
now()->toDateString(); // nice shortcut for the above
In Laravel Eloquent:
$userId = 123;
$createdAt = User::find($userId)->created_at; // an Illuminate\Support\Carbon instance
$localCreatedAt = $createdAt->tz('America/Chicago'); // convert to the user's local time
Let's assume we are in Hawaii and we are interviewing with someone in Melbourne, Australia. The interviewer has given us their local date and time, but we want to find out what our local time will be. Carbon can make that conversion.
// 12 PM in Melbourne, Australia
$auDateTime = 'Thursday Feb 9 2023 12:00:00';
$auTimezone = 'Australia/Melbourne';
$auTime = Carbon\CarbonImmutable::createFromTimeString($auDateTime, $auTimezone);
$localTimezone = 'US/Hawaii';
$localDateTime = $auTime->timezone($localTimezone)->toDateTimeString();
// => "2023-02-08 15:00:00"
// 3 PM in Hawaii, USA
Every developer will eventually need to calculate week/month/year boundaries (start/end dates). Carbon makes this easy.
Note Consider using timezone conversion, otherwise you might not get the desired result if the local time is only a few hours away from UTC. In other words, it might be "tomorrow" in UTC, but still "today" in local time.
$today = Carbon\Carbon::now()->setTimezone('America/Chicago');
// => Carbon\Carbon @1649218375 {#1110
// date: 2022-04-05 23:12:55.846210 America/Chicago (-05:00),
// }
$startOfWeek = $today->startOfWeek()->toDateString(); // => "2022-04-04"
$endOfWeek = $today->endOfWeek()->toDateString(); // => "2022-04-10"
$startOfMonth = $today->startOfMonth()->toDateString(); // => "2022-04-01"
$endOfMonth = $today->endOfMonth()->toDateString(); // => "2022-04-30"
$startOfYear = $today->startOfYear()->toDateString(); // => "2022-01-01"
$endOfYear = $today->endOfYear()->toDateString(); // => "2022-12-31"
Carbon offers a neat way of building date/time objects. One of them is the fluid string constructor which lets you use natural English language to build your object.
use Carbon\Carbon;
// today = 2022-04-07
// returns UTC DateTime object
// chain ->toDateString() to get only the date part
Carbon::make('first day of this month'); // 2022-04-01
Carbon::make('last day of this month'); // 2022-04-30
Carbon::make('first day of last month'); // 2022-03-01
Carbon::make('last day of last month'); // 2022-03-31
Carbon::make('first day of next month'); // 2022-05-01
Carbon::make('last day of next month'); // 2022-05-31
This technique is extremely powerful for calculating total units for a specified interval.
Here's how I used it in a Laravel project. I'm tracking bicycle rides and each record in the database has a INT duration
field in seconds. The user should be able to filter rides of a certain duration, using a user-friendly string like 1h30m
. I can use CarbonInterval
to easily perform the conversion to seconds which I can then use to query the database.
use Carbon\CarbonInterval;
// Several ways to initialize the interval:
CarbonInterval::fromString('1h30m')->total('minutes');
CarbonInterval::make('1h30m')->total('minutes');
CarbonInterval::make('1h30m')->totalMinutes;
// => 90
// Many ways to write the fluid string:
// '1h 30m'
// '1hour30minutes'
// '1 hour 30 minute'
// '1 hour 30 minutes'
// '1 hour and 30 minutes'
CarbonInterval::fromString('1h 30m')->total('seconds');
// 5400
// Also
$seconds = CarbonInterval::fromString('1hour30minutes')->total('seconds');
$seconds = CarbonInterval::fromString('1 hour 30 minutes')->total('seconds');
$seconds = CarbonInterval::fromString('1 hour and 30 minutes')->total('seconds');
$seconds = CarbonInterval::fromString('1 h 30 m')->total('seconds');
$seconds = CarbonInterval::fromString('1 h 30 m')->totalSeconds;
You can obtain the reverse as shown next. Note that the output string is not exactly the same as the input (I need to figure out if this is possible). More on cascade()
in the next sections.
use Carbon\CarbonInterval;
CarbonInterval::fromString('5400 seconds')->cascade()->forHumans(null, true);
CarbonInterval::fromString('5400seconds')->cascade()->forHumans(null, true);
CarbonInterval::fromString('5400s')->cascade()->forHumans(null, true);
// => "1h 30m"
Here are simplified examples of how you might use this with Laravel Eloquent.
/**
* Laravel Eloquent examples
* - The "1h30m" string may come from a user-friendly input
* - The duration DB column is in seconds
*/
use Carbon\CarbonInterval;
// Retrieve all rides at least 1h30m long
$seconds = CarbonInterval::fromString('1h30m')->totalSeconds;
Rides::where(['user_id' => $user->id, ['duration', '>=', $seconds]])->get();
// Retrieve rides between 30m and 2h long
$minDuration = CarbonInterval::fromString('30m')->totalSeconds;
$maxDuration = CarbonInterval::fromString('2h')->totalSeconds;
Rides::where('user_id', $user->id)->whereBetween('duration', [$minDuration, $maxDuration])->get();
One gotcha when using CarbonInterval
factors is that it will return 336 days in a year instead of 365 as most of us are used to. Carbon's month also has 28 days. How is that?
use Carbon\CarbonInterval;
// Retrieve the default factors for
CarbonInterval::getFactor('days', 'week'); // days in a week: 7
CarbonInterval::getFactor('weeks', 'month'); // weeks in a month: 4
CarbonInterval::getFactor('months', 'year'); // months in a year: 12
// => total days in a month = 7 * 4 = 28
CarbonInterval::getFactor('days', 'month'); // days in a month: 28
// => total days in a year = 7 * 4 * 12 = 336
CarbonInterval::getFactor('days', 'year'); // days in a year: 336
When you break it down like this it makes a lot of sense. There needs to be some consistency when doing date/time math and this is how Carbon achieves it. As a consequence, it might trip you up if you're not careful, as shown below. Luckily you can define your own interval factors:
use Carbon\CarbonInterval;
CarbonInterval::fromString('1 year')->total('days'); // ❌ 336
CarbonInterval::fromString('1 year')->totalDays; // ❌ 336
// Find out how many days are defined per year by default
CarbonInterval::getFactor('days', 'year'); // 336
// Set your own
CarbonInterval::setCascadeFactors([
'year' => [365, 'days']
]);
// Now you get the "real" number of days in a year
CarbonInterval::getFactor('days', 'year'); // ✅ 365
CarbonInterval::fromString('1 year')->totalDays; // ✅ 365
Cascade factor argument order is important - the last factor in the arguments array should be the one you need to operate on. Consider these two examples:
CarbonInterval::setCascadeFactors([
'month' => [30, 'days'],
'year' => [365, 'days'],
]);
CarbonInterval::getFactor('days', 'year'); // ✅ 365
CarbonInterval::getFactor('days', 'month'); // ❌ 0
CarbonInterval::make('1 year')->totalDays; // ✅ success: 365
CarbonInterval::make('1 month')->totalDays; // ❌ failure: 0
CarbonInterval::setCascadeFactors([
'year' => [365, 'days'],
'month' => [30, 'days'],
]);
CarbonInterval::getFactor('days', 'year'); // ❌ 0
CarbonInterval::getFactor('days', 'month'); // ✅ 30
CarbonInterval::make('1 year')->totalDays; // ❌ failure: 0
CarbonInterval::make('1 month')->totalDays; // ✅ success: 30
When working with CarbonInterval it is often useful to display the resulting string in a readable, consistent format. This is where cascade() comes in.
In the examples below, we are building a fluid interval string using simple English. If we pass a messy/sloppy string we get the same back, but wouldn't it be nice to automatically parse these units into something that makes more sense? The simplest example is 25h becoming 1d 1h. This is what cascade() does.
A practical use-case can be collecting/collating different time interval units from various parts of the app or database, then transforming them into a consistent string for displaying to the end user.
/**
* CarbonInterval's cascade() method converts units
* into the next one up, when they "spill" over.
* This allows freeform fluid string conversion.
*/
use Carbon\CarbonInterval;
// Ugly and not very readable
CarbonInterval::fromString('13 months 36 days 56 hours')
->forHumans();
// => "13 months 5 weeks 1 day 56 hours"
// Let's cascade that into "standard" units
CarbonInterval::fromString('13 months 36 days 56 hours')
->cascade()
->forHumans();
// "1 year 2 months 1 week 3 days 8 hours"
// Let's get really sloppy
// The order of the units doesn't matter - same result
CarbonInterval::fromString('56 hours 36 days 13 months')
->cascade()
->forHumans();
// "1 year 2 months 1 week 3 days 8 hours"
// You can use shorthand for the constructor
CarbonInterval::fromString('25h')->cascade()->forHumans();
// => "1 day 1 hour"
It's very easy to perform any kind of date/time arithmetic with Carbon. The API is so natural that you can simply guess it without referring to the documentation. A good IDE makes that even easier.
Furthermore, there are multiple ways of achieving the same result, including syntactic sugar for common operations such as yesterday() or tomorrow(). Units can also be specified in singular or plural.
use Carbon\Carbon;
Carbon::today(); // 2022-04-12
// Add one or more days
Carbon::today()->addDay(); // 2022-04-13
Carbon::today()->addDays('1'); // 2022-04-13
Carbon::today()->add('1 day'); // 2022-04-13
Carbon::today()->add('1 days'); // 2022-04-13
Carbon::today()->add('5 day'); // 2022-04-17
Carbon::today()->addYear('2'); // 2024-04-12
// Subtract one or more days
Carbon::today()->subDay(); // 2022-04-11
Carbon::today()->subDays('1'); // 2022-04-11
Carbon::today()->subtract('1 day'); // 2022-04-11
Carbon::today()->sub('1 day'); // 2022-04-11
Carbon::today()->sub('1 days'); // 2022-04-11
Carbon::today()->sub('1 week'); // 2022-04-05
Carbon::today()->subMonth('2'); // 2022-02-12
// Use unit shorthand
Carbon::today()->add('1w'); // 2022-04-19
// Or a mix of units
Carbon::today()->sub('1w1d'); // 2022-04-04
// Easier shortcuts for adding/subtracting 1 day 😎
Carbon::tomorrow(); // 2022-04-13
Carbon::yesterday(); // 2022-04-11
// Don't forget timezone conversion
Carbon::today()->tz('America/Chicago')->add('1w'); // 2022-04-18
// Timestamps work too
Carbon::now()->toDateTimeString();
// => "2022-04-12 02:50:24"
Carbon::now()->add('1h1m1s')->toDateTimeString();
// => "2022-04-12 03:51:25"
// Very small units (milliseconds/microseconds) also work (PHP 7.1+ for full microsecond support), although they are hard to capture
// Here's a way to test that by adding the equivalent of 1 second
dump([
Carbon::now()->toDateTimeString(),
Carbon::now()->add('1000ms')->toDateTimeString(),
]);
// => [
// "2022-04-11 21:30:16",
// "2022-04-11 21:30:17",
// ]
dump([
Carbon::now()->toDateTimeString(),
Carbon::now()->add('1000000microsecond')->toDateTimeString(),
]);
// => [
// "2022-04-12 02:41:11",
// "2022-04-12 02:41:12",
// ]
Carbon and CarbonImmutable offer the same API, but they differ when operated upon: Carbon mutates the instance in place, while CarbonImmutable returns a new instance. As a result, timestamps on Carbon instances may get out of sync (drift) if you're not careful. This can especially occur when chaining operations and assigning Carbon instances to multiple variables. Here's a practical example:
// The problem - assigning a Carbon instance to a variable
$today = Carbon\Carbon::today(); // 2022-04-14
// As you pass around the same instance and apply operations to it, the initial timestamp gets mutated
$tomorrow = $today->addDay(); // ✅ 2022-04-15
$tomorrow = $today->addDay(); // ❌ 2022-04-16
$tomorrow = $today->addDay(); // ❌ 2022-04-17
$dayAfterTomorrow = $tomorrow->addDay(); // ❌ 2022-04-18
// $today is no longer "today"
echo $today->toDateString(); // ❌ "2022-04-18"
// $tomorrow is no longer "tomorrow" either
echo $tomorrow->toDateString(); // ❌ "2022-04-18"
// --------------------------------------
// The solution - prevent drift with CarbonImmutable
$today = Carbon\CarbonImmutable::today(); // 2022-04-14
// You can operate on the original instance as many times as you want...
// ... and it will remain the same
$tomorrow = $today->addDay(); // ✅ 2022-04-15
$tomorrow = $today->addDay(); // ✅ 2022-04-15
$tomorrow = $today->addDay(); // ✅ 2022-04-15
$dayAfterTomorrow = $tomorrow->addDay(); // ✅ 2022-04-16
// Now both $today and $tomorrow are correct
echo $today->toDateString(); // ✅ "2022-04-14"
echo $tomorrow->toDateString(); // ✅ "2022-04-15"
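Incidentally, this mirrors PHP's native date classes - Carbon extends DateTime and CarbonImmutable extends DateTimeImmutable - so you can observe the same drift-free behavior with the standard library alone:

```php
// DateTimeImmutable::modify() returns a NEW instance,
// leaving the original untouched - just like CarbonImmutable
$today = new DateTimeImmutable('2022-04-14');
$tomorrow = $today->modify('+1 day');

echo $today->format('Y-m-d');    // "2022-04-14" - still today
echo $tomorrow->format('Y-m-d'); // "2022-04-15"
```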
A collection of Carbon-related resources from around the web.
Note: This requires MySQL v8.0+.
Grouping total distance by year-month (e.g. 2016-03) for a specific bike is pretty straightforward...
$ridesByYearMonth =
DB::table('rides')
->select(
DB::raw("DATE_FORMAT(start_date, '%Y-%m') AS `year_month`"),
DB::raw('SUM(distance) AS total_distance'),
)
->where([
'user_id' => $userId,
'bike_id' => $bikeId,
])
->groupBy('year_month')
->orderBy('year_month')
->get();
... but it doesn't return gaps for months without rides.
There's a Laravel package for everything, and CTEs are no exception. laravel-cte is a popular, well-maintained package that offers an Eloquent-like syntax for Common Table Expressions across the most popular SQL databases.
Unfortunately I wasn't able to figure out how to use this package in the short time I dedicated to it. Here's an article that might shed some light.
Personally I like to keep dependencies to a bare minimum. In this case, the same outcome can be achieved with raw SQL, though it won't look as clean as using the package. Here's how this looks in Laravel:
$user_id = 1;
$bike_id = 100;
$bindings = [$user_id, $bike_id, $user_id, $bike_id, $user_id, $bike_id];
$query = "
WITH RECURSIVE dates (
date
) AS (
SELECT
DATE(LAST_DAY(MIN(start_date)))
FROM
rides
WHERE
user_id = ?
AND bike_id = ?
UNION ALL
SELECT
DATE(LAST_DAY(date)) + INTERVAL 1 MONTH
FROM
dates
WHERE
DATE(LAST_DAY(date)) <= (
SELECT
DATE(MAX(start_date))
FROM
rides
WHERE
user_id = ?
AND bike_id = ?)
)
SELECT
DATE_FORMAT(date, '%Y-%m') AS 'year_month', COALESCE(total_distance, 0) AS total_distance
FROM
dates
LEFT JOIN (
SELECT
DATE_FORMAT(start_date, '%Y-%m') AS yearmonth,
SUM(distance) AS total_distance
FROM
rides
WHERE
user_id = ?
AND bike_id = ?
GROUP BY
DATE_FORMAT(start_date, '%Y-%m')
) AS rides ON DATE_FORMAT(date, '%Y-%m') = yearmonth;
";
$ridesByYearMonthArray = DB::connection()
->select($query, $bindings); // returns an array
$ridesByYearMonthCollection = collect($ridesByYearMonthArray); // cast to collection
I'm not very happy with how I pass the query bindings (same pair of ids repeated 3 times), but it does the job. You should also consider doing extra validation and casting on those bindings to prevent garbage data.
DB::select(...) also works.
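One small improvement for those bindings: cast the ids up front and build the repeated list programmatically, so garbage input can't reach the query. A sketch, with the variable names from the example above:

```php
// Cast both ids to integers before binding
$user_id = 1;
$bike_id = 100;
$ids = array_map('intval', [$user_id, $bike_id]);

// The CTE uses the same pair in 3 places, in the same order
$bindings = array_merge($ids, $ids, $ids);
// => [1, 100, 1, 100, 1, 100]
```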
I hope Laravel will have first class support for Common Table Expressions at some point in the future, but for now these two techniques should suffice.
Thoughts and improvements? Hit me up on Twitter.
My passion project these days is a Laravel app for cycling data from the Strava API.
I have a Charts section which plots ride distances for all or individual bikes on a bar chart, grouped by year or month.
Notice how there should be gaps in the graph for months without rides for the selected filters (this particular bike), but the graph skips those.
The SQL I'm using to generate the graph looks like this:
SELECT
DATE_FORMAT(start_date, '%Y-%m') AS 'year_month',
COALESCE(SUM(distance), 0) AS total_distance
FROM
rides
WHERE
user_id = XXX
AND bike_id = YYY
GROUP BY
DATE_FORMAT(start_date, '%Y-%m');
A typical record in the rides table looks like this (redacted for brevity):
id user_id bike_id distance start_date
123 XXX YYY 26934.2000 2017-05-13 15:48:15
The partial result of the query is:
year_month total_distance
2016-03 35692.4000
2016-04 390209.3000
2016-05 71417.6000
# gap
2016-08 88008.1000
2016-09 88051.3000
# gap
2017-05 51819.1000
# gap
2018-05 25426.8000
2018-06 205786.6000
2018-07 66438.5000
2018-08 150242.9000
# ...
2022-03 428588.5000
Here's a DB Fiddle of the initial implementation (works in MySQL < 8.0).
But what I want is:
year_month total_distance
2016-03 35692.4000
2016-04 390209.3000
2016-05 71417.6000
2016-06 0.0000
2016-07 0.0000
...
2016-08 88008.1000
2016-09 88051.3000
# and so on...
I've been punting on fixing this for a long time, partially because I had bigger priorities and partially because I'm not a SQL expert who can find a solution in a few minutes.
Earlier in the week I saw this tweet by Tobias Petry which shed light on the problem I was facing. This led me down the path of MySQL 8.0 Common Table Expressions.
Tobias' solution wasn't a drop-in replacement for my specific use-case. Most examples I found on the web deal with 1-day intervals. It seems that most people implementing this pattern need it to track visits on websites, counters, etc. For these you typically want daily intervals. In my case you can have 2-3 rides max in one day accumulating a number of miles/kilometers, so I'm more interested in longer mileage intervals to the order of weeks, months and years.
So I spent a few hours tweaking various aspects of the CTE to fit my use-case. Here's what I came up with.
-- CTE start
WITH RECURSIVE dates (
date
) AS (
SELECT
DATE(LAST_DAY(MIN(start_date)))
FROM
rides
WHERE
user_id = XXX
AND bike_id = YYY
UNION ALL
SELECT
DATE(LAST_DAY(date)) + INTERVAL 1 MONTH
FROM
dates
WHERE
DATE(LAST_DAY(date)) <= (
SELECT
DATE(MAX(start_date))
FROM
rides
WHERE
user_id = XXX
AND bike_id = YYY)
)
-- CTE end
SELECT
DATE_FORMAT(date, '%Y-%m') AS 'year_month', COALESCE(total_distance, 0) AS total_distance
FROM
dates
LEFT JOIN (
SELECT
DATE_FORMAT(start_date, '%Y-%m') AS yearmonth,
SUM(distance) AS total_distance
FROM
rides
WHERE
user_id = XXX
AND bike_id = YYY
GROUP BY
DATE_FORMAT(start_date, '%Y-%m')
) AS rides ON DATE_FORMAT(date, '%Y-%m') = yearmonth;
I won't go over every line explaining what it does (read the MySQL docs for that), but essentially the recursive CTE generates an interval (monthly in this case) between the first and last recorded dates, then LEFT JOINs it against the table we want to summarize. The LEFT JOIN ensures empty values are preserved in the results.
One interesting side effect of this approach is that the lower and upper bounds of the range will map directly to the timestamps of the first and last recorded rides resulting from the query.
Here's a DB Fiddle of the CTE solution (MySQL 8.0+).
The result is what I was hoping to achieve 🎉
year_month total_distance
2016-03 35692.4000
2016-04 105004.6000
2016-05 71417.6000
2016-06 0.0000
2016-07 0.0000
2016-08 88008.1000
And here's how the chart looks now.
Tobias Petry wrote an article about filling gaps in statistical time series results, with MySQL and PostgreSQL examples and a detailed explanation. He also recently launched a free ebook on advanced database techniques which is a must-read.
Not everything is perfect: with a fine enough interval or a long enough date range, the recursion can exceed MySQL's default limit and abort with "Query 1 ERROR: Recursive query aborted after 1001 iterations. Try increasing @@cte_max_recursion_depth to a larger value." This error can be triggered by changing the interval from 1 MONTH to 1 DAY, for example. All things considered, it solves my problem and the speed penalty is insignificant.
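If you do hit that limit, MySQL 8.0 lets you raise it for the current session before running the query (10000 here is an arbitrary value; size it to your date range):

```sql
-- Applies only to the current connection
SET SESSION cte_max_recursion_depth = 10000;
```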
While I have vague apprehension about scaling for a large number of records, I am realistic enough to know that it's not an immediate concern. For one thing, not even professional cyclists would accumulate enough rides to trigger this error. For another, if I ever reach the point where this becomes an issue, it means I'm in an excellent situation (many users with lots of data is a good "problem" to have).
If you have ideas on how to improve it, hit me up on Twitter ideally with a DB Fiddle example.
I continued this in 2020, and now, 2021.
As usual, I won't share any personal stuff, except that it appears I've avoided COVID-19 successfully. Or, if I had it, it was asymptomatic.
Apart from that, I'm grateful for being healthy and aging gracefully.
I continue to be employed as a full stack PHP + JS developer working on legacy code. The work is stressful and not incredibly satisfying but I like the company and co-workers.
We've been fully remote since the pandemic started, and we're not going back to the office. In fact, if there's one good thing that came out of this pandemic, it's that I became determined never to work in a physical location again.
The blog is 3 now 🎂. Whoopty-doo I guess. I'll admit that I've been steadily losing my blogging motivation. The problem is having to make a choice between working on side projects or writing stuff in my spare time. The side projects win most of the time.
The blog remains very useful as a permanent repository for various techniques I discover during my daily coding journey. Posting, however, has been very erratic this year and will likely continue to be so.
Holding a day job requires strict prioritization of extra-curricular activities, so I decided not to put any pressure on myself to blog on a regular schedule.
There are 2 things the blog needs:
I still haven't moved away from Google Analytics and I won't even pretend I will. It's not very high on my priority list. Traffic continues to grow, as these things go, but I just don't care enough to keep a close eye on it.
Here are the most popular articles from the last 12 months, as determined by GA:
I made a few random things in 2021 with Laravel, Svelte, and Electron.
Though I am very curious to try new programming languages such as Rust, Go, and even Python (not new, just new to me), I haven't had time. So I am sticking to the tools I know best, documented here.
2021 marked the rise of Apple Silicon. I am fortunate enough to have access to two M1 machines:
Both machines are a huge improvement over what I had previously. I like the smaller form factor, and the performance is more than I need for web development. The 14" in particular is a work of art, notwithstanding the notch.
Lest you think I'm an Apple fanboi, I use a PC for gaming and entertainment, as well as an Android phone. I do, however, strongly believe that the Mac is the best web development tool (for me).
Continuing and expanding on the trend from 2020, cycling has been my only physical activity this year - and I've done a lot of it.
I've expanded my stable to 3 bikes: road, gravel, and MTB, and I practice all 3 several times a week.
I like long rides - 60+ miles for gravel, ~30 miles for MTB.
This year I rode a total of 6300 miles (10K km) compared to 4000 miles (6400 km) in 2020.
Will I ride even more in 2022? Not necessarily. I feel I've reached a limit for how many miles I can ride in a year, not because I'm not willing or capable, but because I have other things to do outside of my day job. As is, I've squeezed as many rides in as I possibly could; then there's the weather to contend with.
If there's one thing that might change in 2022 in terms of cycling, it's that I'm considering participating in a couple of official events/races. I'm not the competitive type, I don't like large groups, and I prefer to ride on my own schedule, but it might be fun.
Sadly my book reading has declined this year due to strong competition from other activities. The one book that stands out is Nemesis Games (The Expanse book 5). I'm a huge fan of this series and I read one book every year, to keep up with the TV show which I also think is very good.
In 2022 I plan to read some comic books.
I don't watch TV in the traditional sense, but I do stream a fair amount of movies and TV shows.
Some of the movies I liked in 2021 are: Black Cat White Cat (1998), Raya And The Last Dragon (2021), Treasure Planet (2002), The Mitchells Vs. The Machines (2021), Seoul Searching (2015), Wrath Of Man (2021), Werewolves Within (2021), Secondhand Lions (2003), and Red Notice (2021).
I also watched individual seasons from various TV shows. Excellent ones include: What We Do In The Shadows (2019-2020), Invincible (2020), Loki S01 (2021), Reservation Dogs (2021), The White Lotus (2021), Chernobyl (2019), and both editions of Cowboy Bebop (1998 and 2021). I liked the live-action 2021 Cowboy Bebop, so sue me. I also watched about half the seasons of Veep (2012+), probably the most hilarious political comedy show.
Between side projects and cycling, I haven't had much time for gaming. To be honest my motivation to play games has been declining since I tend to prefer working on side projects when I'm in front of a computer.
Having said that, I am entering the new year in fresh possession of 3 new games:
Twitter remains my primary social media outlet. I'm a very slow grower, primarily due to not posting on a regular schedule. However I strictly post webdev-related stuff so I'm definitely worth a follow if you care about Laravel/Livewire/AlpineJS/Svelte/TailwindCSS/Electron and don't appreciate shitposting. I'm ending the year at 333 followers, which is more than I expected.
I am also present on:
On the dev front, Laravel 9 will be one of the hottest things in 2022. I'm also expecting big things from Svelte and SvelteKit especially after Rich Harris joined Vercel to work full-time on Svelte.
I'd like to launch my cycling app/SaaS to the public but it's not set in stone. I might also work on SVGX 2.0, only it would be a paid app this time. And then, of course, I have a few unfinished side-projects that I'd like to complete, as well as tons of new ideas that I'd like to try.
Health-wise, I hope to continue the trend of staying in shape through cycling. I'll be doing a lot of gravel riding in 2022, as well as continuing to improve my MTB skills.
I'll stop short of making any more predictions or resolutions.
Thank you dear friends for your readership, and may 2022 be... a year?
Why not SvelteKit? Two reasons.
First, I already wrote instructions for scaffolding a SvelteKit/Vite/Tailwind site.
Second, I intend to use this as an Electron starter, for which I don't need SvelteKit.
Scaffold a new Vite project with the svelte (or svelte-ts for TypeScript) template:
# for TypeScript use --template svelte-ts
npm init vite@latest my-svelte-app -- --template svelte
cd my-svelte-app
npm install
npx svelte-add@latest tailwindcss
npm install
This step automates most of Tailwind's configuration by creating pre-populated postcss.config.cjs and tailwind.config.cjs files, and filling in the required PostCSS config in svelte.config.cjs.
Finally, open app.postcss and verify that it looks like this:
/* Write your global styles here, in PostCSS syntax */
@tailwind base;
/* Custom classes go here */
/* This will always be included in your compiled CSS */
@tailwind components;
@tailwind utilities;
That's it! Now run the site in dev mode with npm run dev, or build for production with npm run build.
If you want additional customizations, read on.
I'm not big on super-detailed customizations, but there are two things that annoy me right off the bat in a new Svelte project: the default line length (80 is too short), and having semicolons at the end of statements.
In addition, I favor single quotes, as well as trailing commas in objects and arrays.
So I like to add Prettier to fix those.
npm install --save-dev prettier
Then create a .prettierrc.json file in the root of the project with the following contents:
{
"semi": false,
"singleQuote": true,
"trailingComma": "es5",
"printWidth": 120
}
Finally, add a .prettierignore file in the root of the project with the following contents, to ignore the build directory:
dist
Now if you hit OPT-CMD-L (on the Mac) in VS Code, it will format the code according to the Prettier rules. If you want the formatting to be automatic, toggle Text Editor > Formatting > Format On Save.
If you want to use TypeScript, you can add it with the following command:
npm install --save-dev typescript @tsconfig/svelte
Then create a tsconfig.json file in the root of the project with the following contents:
{
"extends": "@tsconfig/svelte/tsconfig.json",
"include": [
"vite.config.ts",
"src/**/*.d.ts",
"src/**/*.ts",
"src/**/*.js",
"src/**/*.svelte"
],
"compilerOptions": {
"target": "ESNext",
"useDefineForClassFields": true,
"resolveJsonModule": true,
"baseUrl": ".",
"allowJs": true,
"checkJs": true,
"isolatedModules": true,
"sourceMap": true,
"strict": true,
"noImplicitAny": true,
"composite": true,
"module": "ESNext",
"moduleResolution": "Node",
"esModuleInterop": true,
"allowSyntheticDefaultImports": true,
"noEmit": true
}
}
Now you can change the script tag in all Svelte components where you want to use TypeScript to:
<script lang="ts">
TL;DR - TALL stack (Laravel + Livewire + AlpineJS + TailwindCSS), Svelte, Electron.
Laravel is my solution for anything requiring a database. Together with Livewire, AlpineJS, and TailwindCSS I can quickly build complex functionality and appealing UI.
With 8.1 PHP has never been better, and in recent years has risen from its slumber to make web development exciting again.
Built on top of PHP, Laravel is the best web framework that ever was (I'm biased, sue me) and continues to improve by leaps and bounds. Every time I ask myself what more is there to improve, along come a host of new features that make it even quicker to ship stuff.
Livewire is the logical companion to Laravel for building interactive UI without much JavaScript. It goes hand-in-hand with AlpineJS for those times when you need fancier UI behavior.
MySQL is the one and only server-side database I will be using in the near future, simply because I've been using it forever and I don't need anything more capable.
I can't live without Tailwind since v0.7. As a full stack dev, it's a very reliable tool for building any kind of user-facing interface, quickly.
For static sites I will absolutely reach for Svelte and SvelteKit. I get a lot of joy building front-end heavy web apps with this fast, lightweight, minimalistic framework (sorry, compiler).
Philosophically, I've abandoned the concept of a backend-driven SPA (Single Page App). With Laravel and Livewire there's just no need for it. However, if I ever needed something along those lines, I would choose Laravel with Inertia.js and Svelte.
Occasionally I build cross-platform desktop apps, such as SVGX. The easy choice is to use Electron in tandem with a preferred JS framework, and that would be Svelte in my case.
I am also exploring the possibility of integrating SQLite in a future Electron app.
I don't see my coding stack changing much over 2022. I've narrowed it down to an ecosystem centered around Laravel for back-end apps, with an offshoot around Svelte for front-end apps. Simple, fun, and super-productive.
Gathered from @enunomaduro and his Types In Laravel talk:
Gathered from @driesvints's Getting The Most Out Of Cashier Stripe & Paddle talk:
Test Form Request gist gathered from @colindecarlo's Practical Laravel Unit Testing talk:
Gathered from @christophrumpel's Why Refactoring Is The Best Tool To Write Better Code talk:
Gathered from @freekmurze's Introduction to Snapshot Testing talk:
When building an app around Strava's API, webhooks become critical, first to avoid unnecessarily polling the API, and second to enable automatic updates for activities and user profile.
The latter is particularly important in order to abide by Strava's terms & conditions. Quote: "Per our API terms, you need to implement webhooks to know when an athlete has deauthorized your API application".
At the moment, the most popular Laravel Strava package is https://github.com/RichieMcMullen/laravel-strava. I have contributed to it previously, in order to further development of my own app.
The package, however, does not have webhook support. I am considering contributing to that, but it's a double-edged sword. Between the design difficulties and the maintenance burden, I'm not sure I want to take on that responsibility.
So in the meantime I hacked together some fairly decent, but basic, webhook functionality directly inside my Laravel 8 app.
Parsing Strava's webhook docs, the basic idea is that an app can subscribe to Strava webhooks, and later unsubscribe. While subscribed, any athlete or activity change triggers an event that gets pushed to the app. The app must be previously authorized from within the Strava web UI.
For example, I finish a bicycle ride and save it on my Garmin device, which pushes it to Strava, which in turn fires off a create event for a new activity.
Unfortunately, that's all it does - it doesn't send any activity data, so the developer would need to make an additional request. This additional request can be done with the laravel-strava package.
Here's what the communication flow looks like:
I'll go over each in detail. For now, here's a top-level view of how I designed this feature. Keep in mind that I am not overly concerned with best practices, testing, etc. It's all experimental at this point, and made for my own needs, however you can use it as a starting point for your own projects.
I built this inside my existing Laravel 8 app in 4 main components:
1. A service class that handles the Strava webhook requests.
2. A singleton binding of that service in the app container.
3. artisan commands that I can use to quickly subscribe/view/unsubscribe from the command line.
4. GET/POST webhook routes in routes/web.php.
Let's dig into the details.
The service class lives in app/Services/StravaWebhookService.php
. It contains methods for making/processing outbound/incoming Strava HTTP requests.
Here's the skeleton (leaving out method implementation for later).
<?php
namespace App\Services;
use Illuminate\Http\JsonResponse;
use Illuminate\Http\Response;
use Illuminate\Support\Facades\Http;
class StravaWebhookService
{
private string $client_id;
private string $client_secret;
private string $url;
private string $callback_url;
private string $verify_token;
public function __construct()
{
$this->client_id = config('ct_strava.client_id');
$this->client_secret = config('ct_strava.client_secret');
$this->url = config('services.strava.push_subscriptions_url');
$this->callback_url = config('services.strava.webhook_callback_url');
$this->verify_token = config('services.strava.webhook_verify_token');
}
public function subscribe(): int|null
{
//
}
public function unsubscribe(): bool
{
//
}
public function view(): int|null
{
//
}
public function validate(string $mode, string $token, string $challenge): Response|JsonResponse
{
//
}
}
Note: If I had done this "by the book" (protip: there is no such thing in programming, don't let anyone tell you that), I might have implemented a more generic interface, say WebhookServiceInterface. Since I only care about a single webhook service for the foreseeable future - and since my app is all about Strava - I don't need the additional complexity at this point.
The constructor is pretty straightforward; it simply assigns a bunch of configuration options.
Config keys prefixed with ct_strava come from config/ct_strava.php, which is the config for laravel-strava. Read the docs to see how it works. This package is pretty much a requirement for my implementation of Strava webhooks.
Keys prefixed with services.strava. are my own webhook-specific configuration. Here's how to configure them:
config/services.php
- webhook service config
return [
// ...
'strava' => [
'push_subscriptions_url' => env('STRAVA_PUSH_SUBSCRIPTIONS_URL'),
'webhook_callback_url' => env('STRAVA_WEBHOOK_CALLBACK_URL'),
'webhook_verify_token' => env('STRAVA_WEBHOOK_VERIFY_TOKEN'),
],
];
.env
- application config
#...
STRAVA_PUSH_SUBSCRIPTIONS_URL="https://www.strava.com/api/v3/push_subscriptions"
# The app webhook callback URL that Strava uses to fire GET/POST events
STRAVA_WEBHOOK_CALLBACK_URL="https://www.yoursite.com/webhook"
# A random string generated by my app that Strava uses to verify the request
STRAVA_WEBHOOK_VERIFY_TOKEN=ABC123defXYZ4567
A word about the verify token. This is a string (preferably random) that you will have to generate somehow. There are many ways to do it. I chose to do it the lazy way, by running Str::random(); once and dumping the result in .env. It's probably worth regenerating this token every once in a while in case it gets leaked on Strava's side somehow. If your .env file gets hacked, you have bigger problems to worry about. I'm fine with the way it works now.
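For the record, Str::random(32) is the Laravel one-liner; if you'd rather generate the token outside the app, plain PHP works too:

```php
// 16 random bytes, hex-encoded => a 32-character token
$token = bin2hex(random_bytes(16));
```

Paste the result into STRAVA_WEBHOOK_VERIFY_TOKEN and you're done.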
Now that the service skeleton is ready, let's take a detour and bind it to the Laravel app container as a singleton. Why a singleton? Because we want this to be instantiated only once throughout the app's lifecycle. Quoting from the Strava webhook docs "Each application may only have one subscription".
To bind the service, open app/Providers/AppServiceProvider.php
and add the following:
use App\Services\StravaWebhookService;
class AppServiceProvider extends ServiceProvider
{
public function register()
{
$this->app->singleton(StravaWebhookService::class, function ($app) {
return new StravaWebhookService();
});
}
// ...
}
This allows us to call methods on the singleton from anywhere in the app, without needing to instantiate it. For example:
$id = app(StravaWebhookService::class)->subscribe();
The subscribe() method
I'll preface this by saying that I don't like to return mixed types. Here I'm returning a subscription id as an integer, or null for any other reason. Ideally I would return an object with additional information, such as response status and errors. For what I need, this does the job.
To subscribe to Strava's webhooks, my app needs to POST the following data. If successful, Strava will respond with a JSON string in the form of {"id": 1234}. You can store this id in the DB if you want to keep track of it, but I'm not doing that.
public function subscribe(): int|null
{
$response = Http::post($this->url, [
'client_id' => $this->client_id,
'client_secret' => $this->client_secret,
'callback_url' => $this->callback_url,
'verify_token' => $this->verify_token,
]);
if ($response->status() === Response::HTTP_CREATED) {
return json_decode($response->body())->id;
}
return null;
}
The validate() method
This is translated to Laravel/PHP from the Node example in Strava's webhook docs.
It runs when Strava makes a GET request to my app's webhook callback. For example: GET https://www.yoursite.com/webhook?hub.verify_token=RANDOMSTRING&hub.challenge=15f7d1a91c1f40f8a748fd134752feb3&hub.mode=subscribe.
If successful, this method responds with a JSON object containing the Strava challenge string. For example: {"hub.challenge": "15f7d1a91c1f40f8a748fd134752feb3"}
. If it fails for any reason, it returns 403 Forbidden
.
public function validate(string $mode, string $token, string $challenge): Response|JsonResponse
{
    // Checks if a token and mode is in the query string of the request
    if ($mode && $token) {
        // Verifies that the mode and token sent are valid
        if ($mode === 'subscribe' && $token === $this->verify_token) {
            // Responds with the challenge token from the request
            return response()->json(['hub.challenge' => $challenge]);
        } else {
            // Responds with '403 Forbidden' if verify tokens do not match
            return response('', Response::HTTP_FORBIDDEN);
        }
    }

    return response('', Response::HTTP_FORBIDDEN);
}
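For reference, the same validation logic can be expressed as a small framework-agnostic JavaScript function, mirroring the Node example this was translated from. This is a sketch, not Strava's exact sample code; VERIFY_TOKEN stands in for the app's configured verify token:

```javascript
// Sketch of the same hub challenge validation, framework-agnostic.
// VERIFY_TOKEN is a placeholder for the token configured in the app.
const VERIFY_TOKEN = 'RANDOMSTRING';

function validate(mode, token, challenge) {
  // Mode and token must both match what we subscribed with
  if (mode === 'subscribe' && token === VERIFY_TOKEN) {
    // Echo the challenge back as JSON with a 200 status
    return { status: 200, body: { 'hub.challenge': challenge } };
  }

  // Anything else gets a 403 Forbidden
  return { status: 403, body: null };
}
```

The PHP version above returns Laravel response objects instead, but the branching is identical.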
view() method
This utility method is useful for checking the status of a subscription. Once again, it returns mixed types: if there's a valid subscription, it returns an integer id; otherwise, it returns null.
As with the subscribe() method, this could be improved by returning an object with more context. It's good enough for what I need.
public function view(): int|null
{
    $response = Http::get($this->url, [
        'client_id' => $this->client_id,
        'client_secret' => $this->client_secret,
    ]);

    if ($response->status() === Response::HTTP_OK) {
        $body = json_decode($response->body());

        if ($body) {
            return $body[0]->id; // each application can have only 1 subscription
        } else {
            return null; // no subscription found
        }
    }

    return null;
}
unsubscribe() method
Unsubscribing from the Strava webhooks deletes the subscription and stops the app from receiving further push notifications.
It has two parts:
1. Call the view() method on the singleton to check if there's an active subscription. This returns the subscription id, with the advantage that we don't have to store it on the app side.
2. Send a DELETE request with the id in the URL.
This method has only two possible outcomes: either the entire operation succeeded, or it failed.
public function unsubscribe(): bool
{
    $id = app(StravaWebhookService::class)->view(); // use the singleton

    if (!$id) {
        return false;
    }

    $response = Http::delete("$this->url/$id", [
        'client_id' => $this->client_id,
        'client_secret' => $this->client_secret,
    ]);

    if ($response->status() === Response::HTTP_NO_CONTENT) {
        return true;
    }

    return false;
}
GET /webhook callback
This is the callback used by Strava to validate the subscription request. As explained earlier, Strava will send a GET https://www.yoursite.com/webhook?hub.verify_token=RANDOMSTRING&hub.challenge=15f7d1a91c1f40f8a748fd134752feb3&hub.mode=subscribe request that my app needs to handle.
All the work can be done directly in the route callback by returning the result of the singleton's validate method, called with the request parameters.
Note: query string parameters in the form hub.verify_token=RANDOMSTRING are read in Laravel as $request->query('hub_verify_token') (the period is replaced with an underscore when the query string is parsed).
Route::get('/webhook', function (Request $request) {
    $mode = $request->query('hub_mode'); // hub.mode
    $token = $request->query('hub_verify_token'); // hub.verify_token
    $challenge = $request->query('hub_challenge'); // hub.challenge

    return app(StravaWebhookService::class)->validate($mode, $token, $challenge);
});
POST /webhook callback
Whenever there's an event, Strava fires a POST /webhook request with the following data. The app needs to respond with 200 OK within 2 seconds, otherwise Strava will retry 3 more times before giving up.
Route::post('/webhook', function (Request $request) {
    $aspect_type = $request['aspect_type']; // "create" | "update" | "delete"
    $event_time = $request['event_time']; // time the event occurred
    $object_id = $request['object_id']; // activity ID | athlete ID
    $object_type = $request['object_type']; // "activity" | "athlete"
    $owner_id = $request['owner_id']; // athlete ID
    $subscription_id = $request['subscription_id']; // push subscription ID receiving the event
    $updates = $request['updates']; // activity update: {"title" | "type" | "private": true/false} ; app deauthorization: {"authorized": false}

    Log::channel('strava')->info(json_encode($request->all()));

    return response('EVENT_RECEIVED', Response::HTTP_OK);
})->withoutMiddleware(VerifyCsrfToken::class);
Notice that at this point I'm simply logging the payload, so that I can parse it later and figure out how to handle it.
Also notice the quick'n'lazy way of disabling CSRF token checking. By default, Laravel will require a CSRF token from any POST request, which is not needed in this situation. I could have created a separate middleware for webhooks, but it's a lot quicker to do it inline for the single endpoint.
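To make the logged data concrete, here's the rough shape of an activity-update event, written as a JavaScript object literal. The field names follow the list above; all the values are invented for illustration:

```javascript
// Illustrative Strava webhook event for an activity title update.
// Field names match the webhook payload; the values here are made up.
const event = {
  aspect_type: 'update',           // "create" | "update" | "delete"
  event_time: 1549560669,          // Unix timestamp of the event
  object_id: 1234567890,           // activity ID, since object_type is "activity"
  object_type: 'activity',         // "activity" | "athlete"
  owner_id: 9999999,               // athlete ID
  subscription_id: 123456,         // push subscription ID receiving the event
  updates: { title: 'Morning Ride' },
};
```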
To make it easy to subscribe/view/unsubscribe, I created 3 very basic artisan commands as follows:
app/Console/Commands/SubscribeToStravaWebhookCommand.php
Run as: php artisan strava:subscribe
<?php

namespace App\Console\Commands;

use App\Services\StravaWebhookService;
use Illuminate\Console\Command;

class SubscribeToStravaWebhookCommand extends Command
{
    protected $signature = 'strava:subscribe';

    protected $description = 'Subscribes to a Strava webhook';

    public function __construct()
    {
        parent::__construct();
    }

    public function handle()
    {
        $id = app(StravaWebhookService::class)->subscribe();

        if ($id) {
            $this->info("Successfully subscribed ID: {$id}");
        } else {
            $this->warn('Unable to subscribe');
        }

        return 0;
    }
}
app/Console/Commands/ViewStravaWebhookCommand.php
Run as: php artisan strava:view-subscription
<?php

namespace App\Console\Commands;

use App\Services\StravaWebhookService;
use Illuminate\Console\Command;

class ViewStravaWebhookCommand extends Command
{
    protected $signature = 'strava:view-subscription';

    protected $description = 'Views a Strava webhook subscription';

    public function __construct()
    {
        parent::__construct();
    }

    public function handle()
    {
        $id = app(StravaWebhookService::class)->view();

        if ($id) {
            $this->info("Subscription ID: $id");
        } else {
            $this->warn('Error or no subscription found');
        }

        return 0;
    }
}
app/Console/Commands/UnsubscribeStravaWebhookCommand.php
Run as: php artisan strava:unsubscribe
<?php

namespace App\Console\Commands;

use App\Services\StravaWebhookService;
use Illuminate\Console\Command;

class UnsubscribeStravaWebhookCommand extends Command
{
    protected $signature = 'strava:unsubscribe';

    protected $description = 'Deletes a Strava webhook subscription';

    public function __construct()
    {
        parent::__construct();
    }

    public function handle()
    {
        if (app(StravaWebhookService::class)->unsubscribe()) {
            $this->info("Successfully unsubscribed");
        } else {
            $this->warn('Error or no subscription found');
        }

        return 0;
    }
}
You can find all the pieces assembled in this Gist. If you look closely, you'll notice some additional logging in the service methods, in lieu of returning explicit errors.
The goal of this project was to complete the Strava webhook subscription flow, without overengineering it. I have achieved that successfully, with a little extra polish that I hadn't planned on initially. This code is now running in production, with an active webhook subscription that accepts Strava events, and logs them for analysis.
To actually make use of the event data I'm collecting, I need to do more work. Here are some of the things that I would like to eventually accomplish using the foundation I've established.
I consider the last point to be quite important for the Laravel-Strava dev community at large. It is a long term goal of mine to eventually have all the Laravel-Strava API+webhook functionality in one package. There are a few caveats, though.
For one thing, the author of the package is not very active - understandably so! It is a huge responsibility to maintain a popular package, and it can become a burden if you're not actively invested in it. If I were to get this functionality merged in, it would become my responsibility, at least partly.
For another, I'm having constant doubts about the API for this kind of functionality. What works for me may not work for others, and I'm not keen on endless debates or covering every possible scenario/use-case.
I have the option of building it into my own fork breadthe/laravel-strava, so keep an eye on that in case you're interested.
Hopefully this was a helpful explanation of how I implemented Strava webhooks in my Laravel app. Feel free to use the code in any way you see fit, and hit me up on Twitter for ideas or suggestions.
Recently the local PHP 8 + Composer 2 + Laravel Valet dev environment on my M1 MacBook Air got trashed for no discernible reason. This might well be an obscure problem that no one else but me encounters, but I'm documenting it nonetheless.
The rough chain of events happened as follows (I don't remember the exact details): running any php or composer command resulted in the process being automatically killed:
php -v
[1] 27499 killed php -v
composer -v
[1] 27590 killed composer -v
Check program location:
which php
/opt/homebrew/bin/php
which composer
/opt/homebrew/bin/composer
This hinted it might be a Homebrew issue.
First I uninstalled PHP 8 with brew uninstall --force php:
php -v
WARNING: PHP is not recommended
PHP is included in macOS for compatibility with legacy software.
Future versions of macOS will not include PHP.
PHP 7.3.24-(to be removed in future macOS) (cli) (built: May 8 2021 09:40:34) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.3.24, Copyright (c) 1998-2018 Zend Technologies
which php
/usr/bin/php
Note: uninstalling PHP 8 might not be strictly necessary, since it will probably get overwritten later down the line when I reinstall, but might as well for good measure.
Next, uninstall Homebrew:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall.sh)"
Finally, install the whole Laravel Valet environment, including a new version of Homebrew, following this helpful M1-specific guide exactly.
The key is to use the arm
Rosetta prefix (as specified in the article), which is something I might have omitted when I first set up this environment many months ago.
Check program locations again:
which php
/usr/local/bin/php
which composer
/usr/local/bin/composer
Locations look good, and the environment should be working at this point.
One more issue: it turns out yarn also got messed up somehow. Yeah, I could switch to npm for my Laravel projects, but I don't feel like adding a package-lock.json file, so I prefer to stick with yarn.
Unfortunately, attempting to install yarn produced another obscure error that took a while to research:
npm install -g yarn
npm ERR! code EACCES
npm ERR! syscall mkdir
npm ERR! path /usr/local/lib/node_modules/yarn
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, mkdir '/usr/local/lib/node_modules/yarn'
npm ERR! [Error: EACCES: permission denied, mkdir '/usr/local/lib/node_modules/yarn'] {
npm ERR! errno: -13,
npm ERR! code: 'EACCES',
npm ERR! syscall: 'mkdir',
npm ERR! path: '/usr/local/lib/node_modules/yarn'
npm ERR! }
npm ERR!
npm ERR! The operation was rejected by your operating system.
npm ERR! It is likely you do not have the permissions to access this file as the current user
npm ERR!
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator.
The solution involved changing ownership on a couple of folders, which allowed the installation to proceed.
sudo chown -R $USER /usr/lib/node_modules
sudo chown -R $USER /usr/local/lib/node_modules
npm install -g yarn
yarn -v
1.22.10
Post-mortem analysis of this leaves me puzzled. I have no idea why the Homebrew versions of PHP and Composer got messed up like that. Perhaps something to do with certain processes that were running at the time getting corrupted when the system was forcefully shut down due to the low power state.
As much as I love the Mac, there's one thing about Mac laptops that I hate, and that is the power management with the lid open/closed.
Expected behavior when using Shut Down:
Instead, what I get - regardless of whether I use Sleep or Shut Down - is this:
There doesn't seem to be any setting that I can tweak to make it work the way I want. Both x86 and ARM versions behave similarly. If you have suggestions, let me know on Twitter.
My personal preference is to keep dependencies to a minimum, for several reasons, but that's a rant for another time.
Here are a few ways in which 3rd party packages can cause headaches:
When a dependency "breaks" for me, I typically do the following:
This will get me unstuck for the time being, but the ideal outcome is for my contribution to be accepted in the official package.
In PHP we use composer
to manage packages (install, remove, update, etc). Dependencies are defined in the composer.json
file.
To install a new package, we run the following command (actual example from one of my apps):
composer require codetoad/strava
This creates an entry in composer.json
like so:
...
"require": {
"php": "^8.0",
"ext-json": "*",
"codetoad/strava": "^1.0",
...
},
...
Note that the official repository for this package is located at https://github.com/RichieMcMullen/laravel-strava.
Continuing on the previous example, I've been using the Laravel Strava package in one of my apps, and so far it's been great at pulling data from Strava's API. However, it does not have the ability to modify data.
I decided I wanted to update several parameters belonging to an Activity, such as name, description, and gear. Since I couldn't simply ask the author to implement this for me, I knew I had to do it myself.
The first step is to fork the package. In GitHub, simply click the Fork button in the top right corner, and it will create a copy of the repository under your own organization. For me, that would be https://github.com/breadthe/laravel-strava.
Now I want to work on the fork locally, so I simply clone it with git clone https://github.com/breadthe/laravel-strava
, then implement the features I need.
One of the cool things Composer lets you do is to link a package to a local directory. This can be done with a relative or absolute link.
I cloned the forked package in the following directory (Mac): /Users/myuser/code/packages/php/laravel-strava
.
The app that requires the package is in /Users/myuser/code/laravel/example-app
. Let's switch to it now, in the terminal. Then I'm going to remove the codetoad/strava
package using Composer.
cd /Users/myuser/code/laravel/example-app
composer remove codetoad/strava
Open up composer.json
and add the following top-level key:
"repositories": [
{
"type": "path",
"url": "/Users/myuser/code/packages/php/laravel-strava"
}
],
This tells Composer to install a package called laravel-strava
from the local path. Alternatively, you can use a relative path such as ../../packages/php/laravel-strava
.
Finally install the package with a @dev
constraint:
composer require codetoad/strava @dev
You should see this in composer.json
:
"require": {
"php": "^8.0",
"ext-json": "*",
"codetoad/strava": "@dev",
...
Now any changes made to the local fork will appear in the app.
Once local development on the fork is complete, I can push my changes to the remote fork. From there, I can create a pull request to the original package, and hope the changes will be accepted.
Meanwhile, there's no need to delay development, since I can simply link to my own fork.
To do this, once again remove the package:
composer remove codetoad/strava
Then modify the repositories
key in composer.json
like so:
"repositories": [
{
"type": "vcs",
"url": "https://github.com/breadthe/laravel-strava"
}
],
This tells Composer to install a package called laravel-strava
from the Version Control System location specified in the URL (instead of the default repo for codetoad/strava
).
Finally, install the package once more, with a dev-master
constraint:
composer require codetoad/strava dev-master
You should see this in composer.json
:
"require": {
"php": "^8.0",
"ext-json": "*",
"codetoad/strava": "dev-master",
...
And that's it. The app can now be deployed to a production server, and running composer install
will install the forked package.
At the time of writing, my PR hasn't been merged yet, but here's what I would do next:
1. Remove the forked package: composer remove codetoad/strava
2. Remove the repositories key from composer.json
3. Delete the vendor/ folder
4. Require the official package again: composer require codetoad/strava
This is simpler than it sounds, but I had to document it for my own benefit. I don't deal with forks often enough to have it committed to muscle memory, so writing it down will definitely help when I need it again. I hope it will be useful to you as well, and if I missed anything or you find any mistakes, let me know on Twitter.
Electron apps built with Svelte are a common scenario for this kind of in-app navigation. Since I haven't found any opinions on how one might handle navigation between different app sections (or pages), I created my own pattern. I call it the Dynamic Svelte Component Router Pattern. It sounds pretentious, and I'm sure others are using the exact same thing, but here it is nonetheless.
My interpretation of this assumes there won't be any URL or query parameters passed to the "route". There is no need for it - in the Electron apps (or any SPA) I build, I prefer to pass variables using props, events, or state.
The basic premise behind this pattern is Svelte's built-in dynamic components using this syntax:
<svelte:component this={expression}/>
To keep things simple, let's assume my app has 2 sections: a Dashboard, and a Settings page. There's also a menu with links to each. I will keep the current page in a very simple store.
TL;DR You can see the final example in the Svelte REPL. Keep reading for more details.
App.svelte - the "wrapper" which handles the dynamic component rendering.
It imports the page store, the menu component, and the two page components.
Next, it defines an array of pages. I use this to get a reference to the component that should be loaded dynamically. This part feels a bit messy, and I have a feeling there might be a better way to handle this, but this is what I came up with.
Finally, the dynamic component matches the store value ($page) with an id in the pages array, and returns the component property, which then gets loaded.
The issue here is that you can't just do const pages = ['Dashboard', 'Settings']; and then pages.find((p) => p === $page), because that would pass a string instead of the actual component object to this={expression}. In other words, it would do <svelte:component this={"Dashboard"}/> instead of <svelte:component this={Dashboard}/>, and that will throw an error.
Thus, the workaround is to use the array I just mentioned. It may not be pretty but it does the job.
<script>
  import { page } from "./store";
  import Menu from "./Menu.svelte";
  import Dashboard from "./Dashboard.svelte";
  import Settings from "./Settings.svelte";

  const pages = [
    { id: "Dashboard", component: Dashboard },
    { id: "Settings", component: Settings },
  ];
</script>

<main>
  <Menu />
  <svelte:component this={pages.find((p) => p.id === $page).component} />
</main>
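As an aside, the lookup array could also be a plain object keyed by page id, which turns the lookup into a direct property access. A sketch, where the Dashboard/Settings constants stand in for the imported components:

```javascript
// Stand-ins for the imported Svelte components, for illustration only.
const Dashboard = { name: 'Dashboard' };
const Settings = { name: 'Settings' };

// Object map: page id -> component, instead of an array of { id, component }.
const pages = { Dashboard, Settings };

// The dynamic component lookup then becomes:
//   <svelte:component this={pages[$page]} />
const current = pages['Dashboard'];
```

Whether this is cleaner than the array is a matter of taste; the array keeps room for extra per-page metadata.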
Menu.svelte - renders the menu links and saves the selected page to the store.
Clicking a link saves a string value of the desired page to the store.
<script>
  import { page } from "./store";
</script>

<nav>
  <ul>
    <li>
      <a href="/" on:click|preventDefault={() => $page = 'Dashboard'}>Dashboard</a>
    </li>
    <li>
      <a href="/" on:click|preventDefault={() => $page = 'Settings'}>Settings</a>
    </li>
    <li>
      <a href="/" on:click|preventDefault={() => $page = 'Foo'}>Foo</a>
    </li>
  </ul>
</nav>

<style>
  ul { margin: 0; padding: 1rem; background-color: cornsilk; }
  li { display: inline; padding: 1rem; }
</style>
Dashboard.svelte + Settings.svelte - the two pages that will ultimately hold whatever you need them to.
store.js - stores the current page.
The simplest possible writable store is initialized with the Dashboard as default.
import { writable } from "svelte/store";
export const page = writable('Dashboard');
And that's it! Clicking the links loads the appropriate component.
In my apps so far, all the pages/sections have been static and well defined, so I didn't bother checking if the clicked page actually exists.
<svelte:component this={expression}/> will simply fail to render if expression is falsy. In this example, though, pages.find((p) => p.id === $page) returns undefined when there's no match, and accessing .component on undefined throws a TypeError. So clicking a page that is not defined in the pages array (such as Foo) will throw an ugly error to the console and block the app.
To handle this more gracefully, I made some changes.
First, I wrapped the component finder in a try/catch, returning false if it's not found.
const getComponent = function () {
  try {
    return pages.find((p) => p.id === $page).component;
  } catch (e) {
    return false;
  }
};
Then the dynamic component tag becomes:
<svelte:component this={getComponent()} />
Now, clicking the invalid Foo link will render empty content, but won't break the page anymore, so we can continue navigating to the other pages.
This last step is probably not needed, unless you are generating component names dynamically based on user input.
As a further enhancement, instead of returning false, it's trivial to return a custom 404 page.
So I created an error component called 404.svelte
, cloned from Dashboard.svelte
. Here's how the final App.svelte
looks, after importing the error component:
<script>
  import { page } from "./store";
  import Menu from "./Menu.svelte";
  import Dashboard from "./Dashboard.svelte";
  import Settings from "./Settings.svelte";
  import NotFound from "./404.svelte";

  const pages = [
    { id: "Dashboard", component: Dashboard },
    { id: "Settings", component: Settings },
  ];

  const getComponent = function () {
    try {
      return pages.find((p) => p.id === $page).component;
    } catch (e) {
      return NotFound;
    }
  };
</script>

<main>
  <Menu />
  <svelte:component this={getComponent()} />
</main>
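For what it's worth, on build targets that support optional chaining and nullish coalescing, the try/catch can be collapsed into a one-liner. A sketch with stand-in component objects (not the actual Svelte imports):

```javascript
// Stand-ins for the imported components, for illustration only.
const Dashboard = { name: 'Dashboard' };
const NotFound = { name: 'NotFound' };

const pages = [{ id: 'Dashboard', component: Dashboard }];

// ?. short-circuits to undefined when find() has no match,
// and ?? substitutes the 404 component in that case.
const getComponent = (current) =>
  pages.find((p) => p.id === current)?.component ?? NotFound;
```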
And that's all there is to it!
SvelteKit is currently in public beta, but it has caused a lot of chatter over the interwebs, and that made me give it a spin.
Here's a super simple setup to scaffold a SvelteKit static site. Since I also ❤️ TailwindCSS, I have instructions for that as well. And to make it a complete package, it all runs on Vite, for a super fast ⚡️ development environment.
Check out my newer Svelte, Vite, TailwindCSS 3 article if all you need is the base Svelte framework.
# 1. Create a new Svelte Kit site
# My choices: no TypeScript, ESLint, Prettier
npm init svelte@next my-app
cd my-app
# 2. Install packages
npm install
# 3. Add TailwindCSS
# @see https://github.com/svelte-add/tailwindcss
npx svelte-add@latest tailwindcss
Step 3 automates most of Tailwind's configuration by creating pre-populated postcss.config.cjs and tailwind.config.cjs files, and filling in the required PostCSS config in svelte.config.cjs.
To finalize installing Tailwind, open app.css
and add the base Tailwind styles right at the top:
@tailwind base;
/* Custom CSS goes here */
@tailwind components;
@tailwind utilities;
...
Finally start it in dev mode, and open it in the browser.
# Run in development mode, and open the site in the browser
npm run dev -- --open
An .exe file on Windows is safer when it comes with a PGP signature that you can verify.
How do you check a PGP signature, though? I've always avoided it until now, when I needed to make doubly sure that a particular installer was the real deal.
Unfortunately Windows 10 does not offer any tools out of the box, instead requiring installation of the Windows 10 SDK.
After downloading and installing from the link above, the SignTool utility should become available. This is what you'll use to verify the installer's signature.
Let's assume you downloaded a file called installer.exe from whatever website. If the website provided a PGP signature, it will likely be named installer.exe.asc, so download it into the same folder as the .exe.
First, locate the exact path of signtool.exe. Mine ended up in a weirdly-named folder: C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x64. You may have multiple folders with names similar to 10.0.17763.0; browse all of them until you find the tool.
Next, open Command Prompt and navigate to the directory where the file (.exe) and its signature (.exe.asc) reside.
Then run this command:
# simplified - when you have signtool in your path
signtool verify /pa installer.exe
# full (quote the path, since it contains spaces)
"C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x64\signtool.exe" verify /pa installer.exe
If the verification succeeded, you will see this message:
Successfully verified: <path>\installer.exe
This guide is based on these instructions, and adapted for my own use-case.
I've been using Svelte for my Electron apps because it's a joy to work with, so this guide is focused on Svelte, but it could be adapted for other frameworks.
I'll use a simple (and very common) example: storing the user's theme (dark/light) preference, and I'll show a few ways in which this can be accomplished using Svelte's store.
The end goal is to be able to style elements depending on how the theme changes. A button toggles the theme. Here's a snippet, using TailwindCSS to apply classes:
import { dark } from "./store";

<header
  class="flex"
  class:bg-gray-100={!$dark}
  class:bg-gray-900={$dark}
  class:text-gray-900={!$dark}
  class:text-gray-100={$dark}
>
</header>

<button
  on:click={() => ($dark = !$dark)}
>
  Switch to { $dark ? 'light' : 'dark' }
</button>
When the theme is dark, the background and text are bg-gray-900 and text-gray-100, respectively - basically light text on a dark background. When the theme is not dark (i.e. light), the opposite is applied.
Let's implement a store in Svelte. Stores are used to persist application state.
The most basic Svelte store will persist a setting only while the app is running, as long as it wasn't refreshed. Here's how it looks:
// store.js
import { writable } from "svelte/store";
export const dark = writable(false);
The default in this case is the light theme (not dark). The obvious problem is that our preference goes away if we close or refresh the app.
A simple and native way to permanently store settings is the browser's localStorage API. It's super simple, the only downside being that it only stores string key-value pairs, meaning you can't store complex objects directly. Fortunately you can use JSON.stringify() when setting a key and JSON.parse() when reading it. Note that even our boolean gets stored as a string ("true"/"false"), so it needs to be parsed back when reading it.
// store.js
import { writable } from "svelte/store";

// localStorage stores strings, so parse the saved value back into a boolean
const storedDark = JSON.parse(localStorage.getItem("dark")) || false;

export const dark = writable(storedDark);

dark.subscribe(value => {
  localStorage.setItem("dark", value);
});
The code gets slightly longer, but it does its job of persisting our preferences between app sessions.
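For completeness, here's how the JSON.stringify()/JSON.parse() round-trip would look for a richer settings object. The settings shape is made up for illustration, and the storage object is a plain stand-in for localStorage so the snippet runs outside the browser:

```javascript
// A stand-in for localStorage (same setItem/getItem string semantics),
// so the round-trip can be shown outside the browser.
const storage = {
  data: {},
  setItem(key, value) { this.data[key] = String(value); },
  getItem(key) { return key in this.data ? this.data[key] : null; },
};

// A hypothetical nested settings object.
const settings = { theme: { dark: true }, locale: 'en' };

// Serialize on write, parse on read.
storage.setItem('settings', JSON.stringify(settings));
const restored = JSON.parse(storage.getItem('settings'));
```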
The next step up is to store the settings on disk. In general this is perceived as more robust than localStorage (which can lose data in rare cases when there's an error), and you can also retrieve and/or back up the settings file if you need to.
We'll need a 3rd party library, and there are many to choose from. I used electron-settings. This library offers a more powerful API, with the ability to store deeply nested objects and retrieve specific keys using dot notation (setting1.setting2.setting3).
Find the full documentation here.
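Conceptually, the dot-notation lookup works like this hypothetical helper (a sketch of the idea, not the library's actual implementation):

```javascript
// Hypothetical sketch of dot-notation lookup on a nested object,
// illustrating what a call like settings.getSync('theme.dark') resolves to.
function getByPath(obj, path) {
  return path.split('.').reduce(
    (node, key) => (node == null ? undefined : node[key]),
    obj
  );
}

const example = { theme: { dark: true } };
const darkSetting = getByPath(example, 'theme.dark');
```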
Install it:
npm install electron-settings
If you're using Electron 10+ and need to use electron-settings in the browser window, configure Electron like so:
new BrowserWindow({
  webPreferences: {
    enableRemoteModule: true // <-- Add me
  }
});
Implementing the store is easy. It looks just like the localStorage
store, except we're swapping out some functions. I prefer to use the *Sync
functions here, but there are async equivalents.
// store.js
import { writable } from "svelte/store";

const settings = require('electron-settings');

const storedDark = settings.getSync('dark') || false;

export const dark = writable(storedDark);

dark.subscribe(value => {
  settings.setSync('dark', value);
});
By default, electron-settings saves the settings file under the userData folder. On Mac, this is /Users/YourUser/Library/Application Support/YourApp/settings.json.
That's it! I showed you 3 ways to store settings in a Svelte + Electron app, two of them persistent.
The goal of this exercise is to deploy a simple Todo app from GitHub to Netlify.
First, create a free account with Netlify. When you log in you should see your dashboard which shows your sites:
I have 4 sites currently (including this blog), all deployed from GitHub. Except for the blog, which uses a custom domain, the 3 remaining sites use Netlify's netlify.app
domain. For example Craftnautica.
A Svelte + Rollup static site typically serves the static portion from a public/ folder, and generates a production bundle (minified CSS + JS) in a public/build/ folder. It can certainly be configured otherwise, and different frameworks will serve their bundles from dist/ or out/ or similar. This guide is about Svelte because this is what I'm using, but it can just as well apply to any framework that generates a static build.
Click New site from Git, then select GitHub. You can use the same process for GitLab or Bitbucket.
Next you'll be presented with a list of repos that you can deploy, however there's a good chance the repo you want is not in the list.
Using the search to filter out the repos will show zero results.
The reason you're not seeing the repo is that it's not yet visible to Netlify. Basically Netlify can't just reach into your GitHub and grab any repo it wants - it requires explicit permission first. The idea is to prevent someone other than the owner from deploying, even if it's a public repo.
So let's give Netlify access to the breadthe/svelte-todo repo by clicking the Configure Netlify on GitHub button, or the Configure the Netlify app on GitHub link.
You'll be presented with another window asking which GitHub user to connect. Select Configure. You will need to provide your GitHub credentials next.
After you've logged in, scroll down until you reach Repository access. Making sure Only select repositories is selected, filter the list to find your repo. In my case, that would be svelte-todo.
Select it and hit Save. It will take you back to Netlify and the repo will now appear in the list that Netlify can see.
Once you've selected the repo, you'll get the deployment settings screen.
Assuming you want to deploy the master branch of your repo, as in my case, keep the defaults.
Pay attention to the Build command (the npm/yarn command that generates the production build, defined in package.json). Here it is yarn build, which is what I want, so I'll keep it.
The Publish directory default is dist/, however, and this does not match my Svelte project structure. What I want here is public/ instead, so I'll change that.
Finally, click Deploy site.
Netlify will take a few minutes to deploy the site. Notice the agitated-volhard-e0ff6a identifier. This is the random sub-domain assigned to the new site. Once deployed, the site can be accessed from agitated-volhard-e0ff6a.netlify.app. That's ugly, of course, so unless it's a throwaway site, I'll show you how to change it in the next step.
Now that the deployment is complete, we can change the domain name by clicking Set up a custom domain.
While you can absolutely use your own domain (for free) if you wish, for simple side projects I like to go with Netlify's domain, and choose a custom sub-domain.
To change the random subdomain from agitated-volhard-e0ff6a.netlify.app to something more palatable, go to Site settings > Domain management > Custom domains > Options > Edit site name.
Well, turns out "svelte-todo" wasn't available, so I picked "svelte-todo3". Now I can serve my site from svelte-todo3.netlify.app, but since I created it just for this guide, I'll delete it, and it won't be accessible anymore, freeing the svelte-todo3 subdomain for reuse.
Netlify has a variety of deployment options under Site settings > Continuous Deployment, but depending on the size of your team and project, a complex workflow might not be needed.
By default, Netlify will deploy any changes pushed to the master (the older default) or main (the newer default) branch. If you're like me, hosting simple static sites in a team of one, this setup is just right.
It's fairly straightforward to deploy static content to Netlify. Svelte apps require a single tweak to get Netlify to serve them correctly. If you need more control over deployments or domains, it's there too.
As an indie maker with zero revenue from my side projects, it wasn't making financial sense to pay for Forge. The biggest value Forge brought me was the initial provisioning of the instance. Afterwards, I continued to deploy code manually (how hard can git pull be?), and do my own basic server maintenance. I've been doing this successfully for the better part of the last 2 years.
This guide is about upgrading from PHP 7.4 to 8.0 on Ubuntu 18.04. As of this writing, I have yet to upgrade to Ubuntu 20.04, but it will happen soon(ish).
The instructions are partly based on this excellent article, but I've added many specifics, details, and pitfalls I encountered in my own situation. It wasn't exactly smooth sailing, as you'll see.
SSH into the instance, then...
# Check PHP version
php -v
PHP 7.4.15 (cli) (built: Feb 23 2021 15:12:05) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
with Zend OPcache v7.4.15, Copyright (c), by Zend Technologies
# List all PHP packages
dpkg -l | grep php | tee packages.txt
# Output:
ii php-common 2:81+ubuntu18.04.1+deb.sury.org+1 all Common files for PHP packages
ii php-igbinary 3.2.1+2.0.8-6+ubuntu18.04.1+deb.sury.org+1 amd64 igbinary PHP serializer
ii php-memcached 3.1.5+2.2.0-9+ubuntu18.04.1+deb.sury.org+1 amd64 memcached extension module for PHP, uses libmemcached
ii php-msgpack 2.1.2+0.5.7-6+ubuntu18.04.1+deb.sury.org+1 amd64 PHP extension for interfacing with MessagePack
ii php-pear 1:1.10.12+submodules+notgz+20210212-1+ubuntu18.04.1+deb.sury.org+1 all PEAR Base System
ii php7.4-bcmath 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 Bcmath module for PHP
ii php7.4-cli 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 command-line interpreter for the PHP scripting language
ii php7.4-common 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 documentation, examples and common module for PHP
ii php7.4-curl 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 CURL module for PHP
ii php7.4-dev 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 Files for PHP7.4 module development
ii php7.4-fpm 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 server-side, HTML-embedded scripting language (FPM-CGI binary)
ii php7.4-gd 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 GD module for PHP
ii php7.4-igbinary 3.2.1+2.0.8-6+ubuntu18.04.1+deb.sury.org+1 amd64 igbinary PHP serializer
ii php7.4-imap 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 IMAP module for PHP
ii php7.4-intl 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 Internationalisation module for PHP
ii php7.4-json 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 JSON module for PHP
ii php7.4-mbstring 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 MBSTRING module for PHP
rc php7.4-memcached 3.1.5+2.2.0-9+ubuntu18.04.1+deb.sury.org+1 amd64 memcached extension module for PHP, uses libmemcached
ii php7.4-msgpack 2.1.2+0.5.7-6+ubuntu18.04.1+deb.sury.org+1 amd64 PHP extension for interfacing with MessagePack
ii php7.4-mysql 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 MySQL module for PHP
ii php7.4-opcache 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 Zend OpCache module for PHP
ii php7.4-pgsql 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 PostgreSQL module for PHP
ii php7.4-readline 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 readline module for PHP
ii php7.4-soap 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 SOAP module for PHP
ii php7.4-sqlite3 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 SQLite3 module for PHP
ii php7.4-xml 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 DOM, SimpleXML, XML, and XSL module for PHP
ii php7.4-zip 7.4.15-7+ubuntu18.04.1+deb.sury.org+1 amd64 Zip module for PHP
ii php8.0-common 8.0.2-7+ubuntu18.04.1+deb.sury.org+1 amd64 documentation, examples and common module for PHP
ii php8.0-igbinary 3.2.1+2.0.8-6+ubuntu18.04.1+deb.sury.org+1 amd64 igbinary PHP serializer
ii php8.0-memcached 3.1.5+2.2.0-9+ubuntu18.04.1+deb.sury.org+1 amd64 memcached extension module for PHP, uses libmemcached
ii php8.0-msgpack 2.1.2+0.5.7-6+ubuntu18.04.1+deb.sury.org+1 amd64 PHP extension for interfacing with MessagePack
ii pkg-php-tools 1.35ubuntu1 all various packaging tools and scripts for PHP packages
# Add ondrej/php PPA
sudo add-apt-repository ppa:ondrej/php # Press enter when prompted.
# CAVEATS:
# 1. If you are using php-gearman, you need to add ppa:ondrej/pkg-gearman
# 2. If you are using apache2, you are advised to add ppa:ondrej/apache2
# 3. If you are using nginx, you are advised to add ppa:ondrej/nginx-mainline or ppa:ondrej/nginx
sudo add-apt-repository ppa:ondrej/nginx
# Remove a repository (not needed here, listed for reference)
# sudo add-apt-repository --remove ppa:ondrej/nginx
# Check if PPA was added - should see a number of entries
grep ^ /etc/apt/sources.list /etc/apt/sources.list.d/* | grep ondrej/php
grep ^ /etc/apt/sources.list /etc/apt/sources.list.d/* | grep ondrej/nginx
sudo apt-get update
sudo apt install php8.0-common php8.0-cli -y
# Check version
php -v
PHP 8.0.2 (cli) (built: Feb 23 2021 15:13:59) ( NTS )
Copyright (c) The PHP Group
Zend Engine v4.0.2, Copyright (c) Zend Technologies
with Zend OPcache v8.0.2, Copyright (c), by Zend Technologies
# Check modules (each will appear on its own line, listed on a single line here for brevity)
php -m
[PHP Modules] calendar Core ctype date exif FFI fileinfo filter ftp gettext hash iconv igbinary json libxml memcached msgpack openssl pcntl pcre PDO Phar posix readline Reflection session shmop sockets sodium SPL standard sysvmsg sysvsem sysvshm tokenizer Zend OPcache zlib
[Zend Modules]
Zend OPcache
# Install additional extensions that were present in 7.4
# Note: no need to install php8.0-json; it's already provided by other packages
sudo apt install php8.0-{bcmath,curl,dev,fpm,gd,igbinary,imap,intl,mbstring,memcached,msgpack,mysql,opcache,pgsql,readline,soap,sqlite3,xml,zip}
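As an aside, the `php8.0-{...}` list is plain shell brace expansion, not an apt feature; the shell expands it into individual package names before apt ever runs. You can preview the expansion with echo:

```shell
# The shell expands the braces before the command runs (requires bash, not plain sh)
echo php8.0-{bcmath,curl,mbstring}
# → php8.0-bcmath php8.0-curl php8.0-mbstring
```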
Since I won't be using old versions of PHP (< 7.4), now is a good time to remove them.
# Purge old packages, in my case from PHP 5.6 thru 7.3
sudo apt purge '^php5.6.*'
sudo apt purge '^php7.0.*'
sudo apt purge '^php7.1.*'
sudo apt purge '^php7.2.*'
sudo apt purge '^php7.3.*'
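The five purges above can be collapsed into a loop. Here's a sketch that only prints the commands, so you can eyeball them before running anything destructive:

```shell
# Print (not execute) one purge command per obsolete PHP series
for v in 5.6 7.0 7.1 7.2 7.3; do
  echo "sudo apt purge '^php${v}.*'"
done
```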
This is for Nginx servers only. Sorry, I can't help you with Apache.
Repeat the procedure for each site. There's a way to automate it, probably with sed, but I don't do this often enough to warrant the effort.
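For reference, the sed approach would look something like the following. It's demonstrated here on a sample line rather than on the live configs (always back up /etc/nginx/sites-enabled/ before bulk-editing anything):

```shell
# Dry-run the substitution on a sample fastcgi_pass line
printf 'fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;\n' |
  sed 's/php7\.4-fpm\.sock/php8.0-fpm.sock/'
# → fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;
```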
sudo vi /etc/nginx/sites-enabled/example.com
# Look for the following block
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.4-fpm.sock; # change to php8.0-fpm.sock, or even better to the generic php-fpm.sock
fastcgi_index index.php;
include fastcgi_params;
}
# Test Nginx config
sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# Restart PHP FPM & Nginx
sudo service php8.0-fpm restart
sudo service nginx restart
Note: The reason I recommend using the generic php-fpm.sock instead of php8.0-fpm.sock is because, on my server at least, php-fpm.sock is actually aliased to php8.0-fpm.sock. Run these commands to find out:
ls -al /var/run/php/php-fpm.sock # /var/run/php/php-fpm.sock -> /etc/alternatives/php-fpm.sock
ls -al /etc/alternatives/php-fpm.sock # /etc/alternatives/php-fpm.sock -> /run/php/php8.0-fpm.sock
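A quicker way to follow the whole chain is readlink -f, which resolves every link in one step. A self-contained demonstration on a throwaway symlink chain mimicking the layout above:

```shell
# Build a chain like php-fpm.sock -> alternatives link -> php8.0-fpm.sock
tmp=$(mktemp -d)
touch "$tmp/php8.0-fpm.sock"
ln -s "$tmp/php8.0-fpm.sock" "$tmp/alternatives-link"
ln -s "$tmp/alternatives-link" "$tmp/php-fpm.sock"
readlink -f "$tmp/php-fpm.sock"   # prints the final target, ending in php8.0-fpm.sock
rm -rf "$tmp"
```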
I don't know the specifics of why it is so, but I assume it was done as part of the PHP 8 upgrade. It works really well for me, as I don't have to worry about changing the Nginx config the next time I upgrade PHP.
Let's say that, after upgrading PHP to the shiny new 8.0, you want to run composer install --no-interaction --prefer-dist --optimize-autoloader in your Laravel project, and see the following errors:
Deprecation Notice: Required parameter $path follows optional parameter $schema in phar:///usr/local/bin/composer/vendor/justinrainbow/json-schema/src/JsonSchema/Constraints/UndefinedConstraint.php:62
Deprecation Notice: Required parameter $path follows optional parameter $schema in phar:///usr/local/bin/composer/vendor/justinrainbow/json-schema/src/JsonSchema/Constraints/UndefinedConstraint.php:108
Deprecation Notice: Method ReflectionParameter::getClass() is deprecated in phar:///usr/local/bin/composer/src/Composer/Repository/RepositoryManager.php:130
Deprecation Notice: Method ReflectionParameter::getClass() is deprecated in phar:///usr/local/bin/composer/src/Composer/Repository/RepositoryManager.php:130
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
PHP Fatal error: Uncaught ArgumentCountError: array_merge() does not accept unknown named parameters in phar:///usr/local/bin/composer/src/Composer/DependencyResolver/DefaultPolicy.php:84
Stack trace:
#0 [internal function]: array_merge()
#1 phar:///usr/local/bin/composer/src/Composer/DependencyResolver/DefaultPolicy.php(84): call_user_func_array()
#2 phar:///usr/local/bin/composer/src/Composer/DependencyResolver/Solver.php(387): Composer\DependencyResolver\DefaultPolicy->selectPreferredPackages()
#3 phar:///usr/local/bin/composer/src/Composer/DependencyResolver/Solver.php(740): Composer\DependencyResolver\Solver->selectAndInstall()
#4 phar:///usr/local/bin/composer/src/Composer/DependencyResolver/Solver.php(231): Composer\DependencyResolver\Solver->runSat()
#5 phar:///usr/local/bin/composer/src/Composer/Installer.php(489): Composer\DependencyResolver\Solver->solve()
#6 phar:///usr/local/bin/composer/src/Composer/Installer.php(232): Composer\Installer->doInstall()
#7 phar:///usr/local/bin/composer/src/Composer/Command/InstallCommand.php(122): Composer\Installer->run()
#8 phar:///usr/local/bin/composer/vendor/symfony/console/Command/Command.php(245): Composer\Command\InstallCommand->execute()
#9 phar:///usr/local/bin/composer/vendor/symfony/console/Application.php(835): Symfony\Component\Console\Command\Command->run()
#10 phar:///usr/local/bin/composer/vendor/symfony/console/Application.php(185): Symfony\Component\Console\Application->doRunCommand()
#11 phar:///usr/local/bin/composer/src/Composer/Console/Application.php(281): Symfony\Component\Console\Application->doRun()
#12 phar:///usr/local/bin/composer/vendor/symfony/console/Application.php(117): Composer\Console\Application->doRun()
#13 phar:///usr/local/bin/composer/src/Composer/Console/Application.php(113): Symfony\Component\Console\Application->run()
#14 phar:///usr/local/bin/composer/bin/composer(61): Composer\Console\Application->run()
#15 /usr/local/bin/composer(24): require('...')
#16 {main}
thrown in phar:///usr/local/bin/composer/src/Composer/DependencyResolver/DefaultPolicy.php on line 84
Fatal error: Uncaught ArgumentCountError: array_merge() does not accept unknown named parameters in phar:///usr/local/bin/composer/src/Composer/DependencyResolver/DefaultPolicy.php:84
Stack trace:
#0 [internal function]: array_merge()
#1 phar:///usr/local/bin/composer/src/Composer/DependencyResolver/DefaultPolicy.php(84): call_user_func_array()
#2 phar:///usr/local/bin/composer/src/Composer/DependencyResolver/Solver.php(387): Composer\DependencyResolver\DefaultPolicy->selectPreferredPackages()
#3 phar:///usr/local/bin/composer/src/Composer/DependencyResolver/Solver.php(740): Composer\DependencyResolver\Solver->selectAndInstall()
#4 phar:///usr/local/bin/composer/src/Composer/DependencyResolver/Solver.php(231): Composer\DependencyResolver\Solver->runSat()
#5 phar:///usr/local/bin/composer/src/Composer/Installer.php(489): Composer\DependencyResolver\Solver->solve()
#6 phar:///usr/local/bin/composer/src/Composer/Installer.php(232): Composer\Installer->doInstall()
#7 phar:///usr/local/bin/composer/src/Composer/Command/InstallCommand.php(122): Composer\Installer->run()
#8 phar:///usr/local/bin/composer/vendor/symfony/console/Command/Command.php(245): Composer\Command\InstallCommand->execute()
#9 phar:///usr/local/bin/composer/vendor/symfony/console/Application.php(835): Symfony\Component\Console\Command\Command->run()
#10 phar:///usr/local/bin/composer/vendor/symfony/console/Application.php(185): Symfony\Component\Console\Application->doRunCommand()
#11 phar:///usr/local/bin/composer/src/Composer/Console/Application.php(281): Symfony\Component\Console\Application->doRun()
#12 phar:///usr/local/bin/composer/vendor/symfony/console/Application.php(117): Composer\Console\Application->doRun()
#13 phar:///usr/local/bin/composer/src/Composer/Console/Application.php(113): Symfony\Component\Console\Application->run()
#14 phar:///usr/local/bin/composer/bin/composer(61): Composer\Console\Application->run()
#15 /usr/local/bin/composer(24): require('...')
#16 {main}
thrown in phar:///usr/local/bin/composer/src/Composer/DependencyResolver/DefaultPolicy.php on line 84
Oops, apparently I completely forgot that my server was still running Composer 1.x, while my local environment was upgraded to 2.x a long time ago. Time to upgrade the server too.
This article partially covers the procedure, but here are my own steps:
# Check version
composer --version
Composer version 1.10.1 2020-03-13 20:34:27
# Upgrade Composer to 2.0
sudo composer self-update
# Downgrade Composer to 1.0 (in case you need it)
# sudo composer self-update --rollback # return to version 1.10.1
# Check version
composer --version
Composer version 2.0.11 2021-02-24 14:57:23
composer install should once again work without issues.
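To avoid getting bitten by this again, a deploy script can assert the Composer major version up front. A sketch that parses a version string in the format `composer --version` prints (hard-coded here so the example is self-contained; in a real script you'd capture the actual command output):

```shell
# Extract the major version from the output format shown above
ver_line="Composer version 2.0.11 2021-02-24 14:57:23"
major=${ver_line#Composer version }   # strip the leading label
major=${major%%.*}                    # keep the digits before the first dot
if [ "$major" -ge 2 ]; then
  echo "Composer $major.x detected, good to go"
else
  echo "Composer 2.x required, found $major.x" >&2
  exit 1
fi
```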
Everything looks fine and dandy, right? Not so fast. There's always a final wrench in the proverbial gears. You may not encounter this specific issue, but it's absolutely worth documenting.
When I loaded one of my main Laravel sites in the browser, I was presented with a nice blank page (error reporting to the client is off in production, as it should be), but with a 500 Internal Server Error in the dev console. So let's tail the Nginx app log to see what is going on, using tail -f /var/log/nginx/example.com-error.log:
2021/03/05 06:42:09 [error] 836#836: *1 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught ErrorException: file_put_contents(/home/forge/example.com/storage/framework/views/2e31adb7dfd4e14cc6108d8b49272e43adaa7371.php): Failed to open stream: Permission denied in /home/forge/example.com/vendor/laravel/framework/src/Illuminate/Filesystem/Filesystem.php:135
Stack trace:
#0 [internal function]: Illuminate\Foundation\Bootstrap\HandleExceptions->handleError()
#1 /home/forge/example.com/vendor/laravel/framework/src/Illuminate/Filesystem/Filesystem.php(135): file_put_contents()
#2 /home/forge/example.com/vendor/laravel/framework/src/Illuminate/View/Compilers/BladeCompiler.php(150): Illuminate\Filesystem\Filesystem->put()
#3 /home/forge/example.com/vendor/laravel/framework/src/Illuminate/View/Engines/CompilerEngine.php(51): Illuminate\View\Compilers\BladeCompiler->compile()
#4 /home/forge/example.com/vendor/facade/ignition/src/Views/Engines/CompilerEngine.php(37): Illuminate\View\Engines\CompilerEngine->get()
#5 /home/forge/example.com/vendor/laravel/fram...PHP message: PHP Fatal error: Uncaught ErrorException: file_put_contents(/home/forge/example.com/storage/framework/views/2e31adb7dfd4e14cc6108d8b49272e43adaa7371.php): Failed to open stream: Permission denied in /home/forge/example.com/vendor/laravel/framework/src/Illuminate/Filesystem/Filesystem.php:135
Stack trace:
#0 [internal function]: Illuminate\Foundation\Bootstrap\HandleExceptions->handleError()
#1 /home/forge/example.com/vendor/laravel/framework/src/Illuminate/Filesystem/Filesystem.php(135): file_put_contents()
#2 /home/forge/example.com/vendor/laravel/framework/src/Illuminate/View/Compilers/BladeCompiler.php(150): Illuminate\Filesystem\Filesystem->put()
#3 /home/forge/example.com/vendor/laravel/framework/src/Illuminate/View/Engines/CompilerEngine.php(51): Illuminate\View\Compilers\BladeCompiler->compile()
#4 /home/forge/example.com/vendor/facade/ignition/src/Views/Engines/CompilerEngine.php(37): Illuminate\View\Engines\CompilerEngine->get()
At first glance it looks like Laravel doesn't have permissions to write to the storage/ folder.
After some digging, I realized that ownership and permissions for the storage/ folder had changed for some reason. I seem to recall a similar situation from a couple of years back.
An easy way to find out if permissions are out of whack is to compare with another site that still works. Right away I noticed that the sub-folders in storage/ had 755 permissions instead of 775.
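You can also have find do the comparison for you by listing directories that lack group write. A self-contained sketch against a throwaway tree (on a real server you'd point it at the site's storage/ path):

```shell
# One 'bad' 755 directory and one 'good' 775 directory
tmp=$(mktemp -d)
mkdir -m 755 "$tmp/views"
mkdir -m 775 "$tmp/cache"
# Print directories missing the group-write bit (only 'views' shows up)
find "$tmp" -mindepth 1 -type d ! -perm -g=w
rm -rf "$tmp"
```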
# Before 755
ls -al /home/forge/example.com/storage/framework/
total 76
drwxr-xr-x 6 forge forge 4096 Mar 5 07:00 .
drwxr-xr-x 6 forge forge 4096 Aug 23 2019 ..
drwxr-xr-x 3 forge forge 4096 Aug 23 2019 cache
-rwxr-xr-x 1 forge forge 103 Aug 23 2019 .gitignore
drwxr-xr-x 2 forge forge 40960 Mar 5 06:16 sessions
drwxr-xr-x 2 forge forge 4096 Aug 23 2019 testing
drwxr-xr-x 2 forge forge 12288 Mar 5 06:37 views
# Recursively fix ownership and permissions
sudo chown -R forge:www-data storage
sudo chmod -R ug+w storage
# After 775
ls -al /home/forge/example.com/storage/framework/
total 48
drwxrwxr-x 6 forge forge 4096 Jan 19 2020 .
drwxrwxr-x 5 forge forge 4096 Jan 19 2020 ..
drwxrwxr-x 3 forge forge 4096 Jan 19 2020 cache
-rwxrwxr-x 1 forge forge 103 Jan 19 2020 .gitignore
drwxr-xr-x 2 forge forge 40960 Mar 5 06:16 sessions
drwxrwxr-x 2 forge forge 4096 Jan 19 2020 testing
drwxr-xr-x 2 forge forge 12288 Mar 5 06:37 views
php artisan cache:clear
composer dumpautoload
sudo service nginx restart
# OK
This concludes the PHP 7.4 -> 8.0 upgrade. All systems are green. Lessons were learned.
I started out as a Mac hater or, at the very least, contemptuous of the Mac culture I perceived to be borderline fanatical.
You see, I was born and partially raised in Romania. After the fall of communism and the onset of democracy, the computer industry - more precisely the PC - exploded. Everyone got access to cheap PC hardware and all the pirated software they could handle. For a while it seemed like there wasn't a single piece of legal software in the whole country. I only slightly exaggerate, but for a newly emerging democracy it was par for the course. From regular people, to companies, to public institutions and government agencies, software piracy was as natural as three meals a day.
I still remember the Microsoft anti-piracy campaigns from the 90s and early 2000s. The educational propaganda was especially hilarious because it assumed people could actually afford to pay for software at western rates, when the average monthly income was $100 or less. For reference, my first full time software developer job in the early 2000s paid $250 a month.
I stopped pirating software a very long time ago, after income grew, and it became more worthwhile to buy the licensed product. Whatever your feelings about piracy, it did help create a class of skilled software developers that has thrived up to this day. Often you'll hear of Romanian (and other Eastern European) hackers in the news. You'll also find Romanian developers across the world, as many of us have left the country for better pastures.
During those early days, the PC was seen as the cheapest way to get into computers. You could piece it together from used parts traded from your friends. The hardware could run not just (pirated) Windows, but also Linux.
As for Apple computers, I don't remember hearing of anyone owning one before the 2000s. The Mac was perceived as a luxury product that only high-end agencies or wealthy people used.
After I moved to the US, the Mac became commonplace, but still not as ubiquitous as PCs. The companies I worked for were not fond of Macs, and they imprinted some of that disdain in me. At one point, a developer was hired, and they wanted a Mac. Needless to say, that person was scoffed at, and issued a PC.
I came out of this period with the utterly wrong and biased opinion that those who insisted on using Macs were snobs. Surely they were elitists who got a kick out of bragging about their overpriced hardware. I didn't get it until much later.
As the saying goes, "I'll try anything twice", or something like that. Fast-forward a few years, new company, new culture, most devs are using Macs by choice. I started out with a PC, but shortly made the fateful decision to inherit a Mac from a departing colleague.
It sounds dramatic, but that seemingly insignificant decision marked a turning point in my career. Getting used to a new OS and the associated philosophy took some time, but it was so worthwhile.
From that moment I became a true indie hacker. Don't get me wrong, I hacked before outside my day job, but it just wasn't as exciting. Simply having a Mac gave me an incentive to use it more.
I turned into the very type of person I used to mock. A Mac fanboi. There's something about the Mac that no other vendor has managed to replicate.
Mac fans often claim the reason why Macs are so good is that the hardware and software are perfectly integrated. That's definitely part of it.
The hardware is slick, solid, and has a premium feel. The Mac keyboard's shape and layout are consistent. Has Apple screwed up on the hardware side? Absolutely. We all know about the recent keyboard controversies, the removal of ports, the touch bar, and so on. There are encouraging signs that they have learned some lessons, if we are to believe rumors about the next gen M1X 14" and 16" MacBook Pros.
Speaking of the keyboard - let's ignore the failed experiment with butterfly switches - I consider the Mac keyboard to be one of the best I've ever used, particularly the Magic Keyboard 2. The scissor switches in this keyboard, as well as the recent generation of MacBooks, feel great, and make me a very efficient typist. This means fewer mistakes when I'm coding, so higher efficiency. Then again, keyboards are a very personal thing, and a lot of devs prefer long-travel mechanical keys. I'll admit I tried that, but it's not for me.
Even the keyboard layout is unmatched. The simplest thing, such as the location of the Cmd key, is an order of magnitude more ergonomic than other keyboards. It makes Cmd+C/Cmd+V second nature. How many times a day do you use that particular combo? Hundreds perhaps. It adds up over the course of years.
Moving on to the trackpad itself, it may sound like a banal feature, but I've never come across a better implementation than the glass MacBook trackpads post-2015 or so. What I love the most about it is that you can tap it anywhere (not just in the lower half like most Windows laptops), and it engages with a solid, distinct click. I like it so much that I got a wireless Magic Trackpad for my desk. I've never used a mouse with a Mac, just because I find the trackpad sublime.
MacOS itself is slick and quite beautiful, much more than Windows. I'm particularly fond of the thin, subtle window frames, buttons, and UI elements. Windows, by contrast, feels clunky and outdated. When I'm in Windows, I try to ignore the UI as much as possible, but on the Mac I'm always admiring the attention to detail. Big Sur has stepped it up another notch.
Tightly integrating hardware and software definitely has its advantages. Apple is able to precisely control and direct the experience. Everything works together efficiently, the downside being that it is not an open system. Hardware internals are limited to what Apple decides to offer in the current iteration. While you can transplant MacOS onto unsupported hardware (Hackintosh), it doesn't make for a complete experience in my opinion.
I will admit that building a Hackintosh has crossed my mind a couple of times, to cut down on the number of machines while still being able to play games on powerful hardware, like I can on my PC.
A very important feature that makes it the ultimate dev machine for me, is the Unix/*NIX underpinnings. This makes it super easy to set up all the tools a web developer needs. Plus there's already a huge community and countless tools for the MacOS platform.
Windows is trying to catch up with WSL, but it's far from the streamlined Mac experience. I know that, because I worked professionally as a web dev in a Windows environment for several years and disliked many aspects of it.
It may sound shallow or extreme, but I decided I wasn't going to accept another position with a company that refused to issue the tool I need for maximum efficiency.
Many workplaces do not provide enough career growth opportunities in your direction of choice. Very often, the only thing you can do to advance your skills is to hack on side projects outside of work.
The Mac gives me joy from the moment I pick it up, and it motivated me to learn new technologies and build small projects on a daily basis. I've kept at it over the years, and it has become one of my main hobbies. It's how I learned Laravel, Vue, Electron, Svelte, and other technologies that I would never have had the chance to use at my day job at the time.
I can confidently say that a PC would have never provided the same motivation, but perhaps that's also because the PC was always an entertainment device for playing games, watching movies, shows, and so on. On the other hand, I've never played a game on a Mac; it's always been my dedicated developer machine.
The stack I mentioned - Laravel, Vue, Svelte, Electron - is my absolute favorite. I've had the good fortune to use part of it professionally. Same as the Mac, it brings me joy when I build something in it. My career might have had a very different trajectory, had the Mac not nudged it in this direction.
Predicting the future is impossible, but I like to believe that the Mac will continue to improve in meaningful ways, until I retire and beyond. I don't see myself writing code on another platform, though I am fully aware that things can change. I suppose I could try developing on a Linux machine, but it would not be the same. Even assuming I found decent hardware, the integration is not there, not to mention all the dev tools that already exist on the Mac.
Fortunately there are signs that Apple has listened to some of the negative feedback from the recent past, and is acting upon it. Rumor has it that in 2021 a new generation of MacBook Pros will be released, in 14" and 16" screen sizes, with a new M1X ARM processor, no touch bar 😳, plenty of ports, and MagSafe 🤯. That would be seriously impressive.
Even the first iteration of Apple silicon in the form of the M1 has been an excellent move. It once again differentiates the Mac from the rest of the market. Moreover, it introduces a new, powerful architecture that should provide a strong foundation for years to come.
It sounds backwards to end with "I just bought my first Mac", doesn't it? But it's true. The Macs I've used until now have been surplus hand-me-downs from places I worked at, and company machines. The one closest to my heart has been a 2015 MacBook Pro, the last of the "good" generation, with all the ports, physical function keys, and scissor switch keyboard.
Sadly this old workhorse has been getting on in years, and it was time to replace it. I've been watching the M1 with interest, and thought I should wait for the rumored 14" M1X. However, all reports and reviews indicated that the M1 Air was more than capable, and so...
After many years of using them, I finally bought my first Mac. It's a MacBook Air, with the 8-core GPU, 16GB RAM, and 512GB storage. I've had it for a week now, and I absolutely adore it. It's more performant than the old 15" (even my company-issued i9 16"), in a small package, with stellar battery life.
Here's to many joyful years hacking away on the M1 Air!
Goodbye old 15" friend! Hello new Air buddy!
To recap, SVGX is an offline desktop SVG icon & asset manager for designers and developers. I made it to make my life working with SVG graphics easier.
SVGX is free to download and use, but I'm experimenting with monetization in the form of selling access to the source code. I call this model Gitware.
Get the app from the link above, and please vote for it on Product Hunt!
That is all! 🎉
In an Electron app, the application menu consists of items such as File, Edit, View, and so on. Besides the standard menus found in most applications, you can also create your own menu entries with custom functionality.
Sometimes you want these entries to be enabled or disabled dynamically based on data in the application state. Here's how to toggle menu item visibility programmatically, from the application code - or Renderer process.
If you want to skip the bla bla, here's a simplified version of the workflow, which assumes the menu is built in index.js.
// Main process (index.js)
const { Menu, ipcMain } = require('electron');
let mainWindow; // Main application window, created with new BrowserWindow({...}), code omitted for brevity
// Build the application menu
const menu = Menu.buildFromTemplate([
{
label: 'Edit',
submenu: [
{
id: 'revert-changes',
label: 'Revert Changes',
click: revertChanges,
enabled: false
}
]
},
]);
Menu.setApplicationMenu(menu);
// Forward the 'revertChanges' event to the Renderer process
function revertChanges() {
mainWindow.webContents.send('revertChanges'); // Discard the code changes
}
ipcMain
// Listen for an event from the Renderer process, and toggle the menu item accordingly
.on('originalFileModified', (events, args) => {
Menu.getApplicationMenu().getMenuItemById('revert-changes').enabled = args.originalFileModified;
});
I'm building SVGX, an Electron + Svelte app. I have a code area that I can edit. When the original code is modified, an 🟠 orange dot appears along with a revert icon. Clicking the icon reverts the code to the original state.
I wanted to have an option to Revert Changes under the Edit menu. As a UX improvement, I also wanted this entry to be disabled by default, until the code is modified, whereupon it would become enabled.
You can read more about the Main and Renderer processes in the official documentation, but here's how they fit into this scenario.
Main is responsible for creating the application menu and listening for events from the Renderer process.
Renderer emits events to the Main process.
I'm aiming for the following:
For simplicity, I will show only 3 of the modules involved in this process: index.js, menu.js, and CodePane.svelte, and will strip out most of the code, except for the relevant bits.

- index.js (Main process) is the entry point to the Electron app, responsible for creating the BrowserWindow and the menu, among other things.
- menu.js (Main process) holds the array of custom menu entries; it could just as well have been part of index.js, but I extracted it for readability.
- CodePane.svelte (Renderer process) is the Svelte component that displays the code/markup and allows editing.

The more fleshed-out solution is shown below, with comments for clarification.
index.js
const { app, Menu, ipcMain } = require('electron');
const { menuTemplate } = require('./lib/menu.js');
// ...
let mainWindow;
// ...
const createWindow = () => {
// Create the browser window
mainWindow = new BrowserWindow({
// ...
});
};
function createAppMenu() {
const menu = Menu.buildFromTemplate(menuTemplate);
Menu.setApplicationMenu(menu);
}
app.whenReady().then(() => {
createWindow();
createAppMenu();
});
ipcMain
// Forward the 'revertChanges' event to the Renderer process
.on('revertChanges', () => {
mainWindow.webContents.send('revertChanges'); // Discard the code changes
})
// Toggle the Edit > Revert menu option depending if the file was modified
// The "originalFileModified" event is emitted from the Renderer process (the Svelte component)
.on('originalFileModified', (event, args) => {
Menu.getApplicationMenu().getMenuItemById('revert-changes').enabled = args.originalFileModified;
});
menu.js
const { ipcMain } = require('electron')
module.exports = {
menuTemplate: [
// ...
{
label: 'Edit',
submenu: [
{ role: 'undo' },
{ role: 'redo' },
{ type: 'separator' },
{
id: 'revert-changes', // Needs an id so I can reference it easily
label: 'Revert Changes',
click: revertChanges,
enabled: false
},
{ type: 'separator' },
{ role: 'cut' },
{ role: 'copy' },
{ role: 'paste' },
// ...
]
},
// ...
]
}
// Emitting to index.js
function revertChanges() {
ipcMain.emit('revertChanges');
}
CodePane.svelte
<script>
const ipcRenderer = require("electron").ipcRenderer;
import { onMount } from "svelte";
import { originalFileModified } from "../store/svg";
// Watch the originalFileModified store value for changes...
// ... and fire an event to the Main process when a change occurs
$: {
ipcRenderer.send("originalFileModified", {
originalFileModified: $originalFileModified
});
}
onMount(() => {
ipcRenderer.on("revertChanges", (event, args) => {
// Logic for discarding the changes to the markup
// ...
});
});
</script>
I was stumped initially by how to enable and disable an Electron menu item dynamically, but was certain there had to be a way. Sure enough, the key to the solution is this piece of code in the Main process: Menu.getApplicationMenu().getMenuItemById('revert-changes').enabled. It gets a reference to the menu item I'm targeting, then toggles it based on an event emitted from the Renderer process.
2020 has been a rollercoaster for most people, but despite a few lows I don't have reasons to complain, and for that I am grateful.
My personal life is... personal, and I don't see much point in bringing it into this tech-focused blog.
Career-wise, in 2020 I found new employment after programming in Laravel for 1.5 years for a company here in Illinois. The new company is actually one I had worked for before, and it felt good to return. Unfortunately the work is just plain PHP, and a lot more stressful, albeit compensated by the team and the company structure.
I'll admit that without after-hours hacking in Laravel, Vue, Svelte, etc., I would go a little crazy - I need an outlet for my desire to code fun things.
Shortly after I joined the new/old company, the pandemic went into full force, and we went fully remote. I'll make no apologies about the fact that I consider WFH to be one of the best things about this train-wreck of a year. I strongly believe this is the way to go, especially in tech (or most office-type jobs for that matter). It saves so much time and energy commuting, reduces pollution and traffic congestion, and just saves resources in general.
The only downside for me is the lack of interaction with my co-workers. I'm talking about the water cooler type of interaction, not the work-related stuff which can be resolved quite easily over Slack or voice chat.
The blog is now just over 2 years old. In 2020, I've been posting a lot less, primarily because I started to exercise a lot more, which put a time crunch on my other activities outside work. My inspiration waned this year, but at the same time I preferred to use my time for coding rather than writing about it. When you have a full-time job, and a number of hobbies, you need to set priorities, and blogging suffered as a consequence.
I enjoy blogging, I really do. I get a lot of satisfaction out of crafting a good quality blog post, but there's the rub. Crafting an article, as opposed to writing it quickly, is a lengthy endeavor. It takes many hours to write the article itself, not to mention all the related research. I don't always have this time at my disposal, and usually I have to spread it out over several days.
Unfortunately, I still use Google Analytics to track visits to the blog. I've been meaning to search for a better, more ethical solution, but I haven't found the bandwidth yet.
Google Analytics (or GA as it's colloquially known) is hard to interpret and use for someone who doesn't spend hours in it every day. I don't have the time and patience to comb through the data and extract meaningful information out of it. It does seem, though, that traffic has increased by a lot. That's good.
I need to move away from GA in 2021, but this means I'll probably have to run the new solution in parallel with GA for the whole year, so I can have a way to compare the data. Other providers have different ways of tracking visits.
Here are the most popular articles from the last 12 months, as determined by GA:
You'll notice several new popular articles around Livewire and Alpine.js. This year I've been using these two frameworks a lot in my side projects, and there's been a lot of interest surrounding them.
1Secret is one of my oldest and most complete side projects. To quote:
1Secret is a service that allows you to share sensitive data (text or files) through unique URLs that expire after a set time. Once the URL (or secret) expires, the data is destroyed on the server permanently.
The project has been stagnating for 2 years. Every year I keep telling myself "this is the year I will launch it officially", and then stuff happens, and I postpone it.
The truth is that I've been having internal debates about the direction of 1Secret. I strongly believe in the concept of transient secrets, and I use the service myself on a daily basis. But I am not sure if it can provide enough value that people are willing to pay for it, and I am terrible at marketing, which means I'm not confident I can make a good case for it.
One of the biggest hurdles to launching 1Secret is the pricing structure. I've been drafting up various price points, and I think I'm narrowing it down to a reasonable balance between affordability (for the user) and profitability (for me).
In 2021, I want to finally announce it to the public, in its current v1.x state. I will, however, end the free Premium sign-ups, meaning that whoever wants to use the more advanced features will have to subscribe.
I have a lot of ideas on how to improve the service, but that will require a v2.0, and likely a full re-write. Depending how v1.0 will be received, I'll choose a strategy when the time comes.
SVGX is a desktop app I started working on at the beginning of this year. It's an offline SVG icon library manager, and it came about from the way I use SVGs in my code. I like to download SVG icon libraries, then search for the ones I need. SVGX makes it super simple to search for icons, then copy the SVG markup so I can paste it in my code. No more manually searching, then opening it in a text editor, and so on.
There's a feature for previewing and generating background-image
CSS markup for repeating SVG backgrounds. Before I had SVGX, I was always hesitant to use repeating backgrounds in my designs due to how awkward it is to generate that code quickly.
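For the curious, the general technique behind such a feature is to URL-encode the SVG markup into a data URI inside a background-image declaration. The sketch below is my own illustration of the idea, not SVGX's actual code:

```javascript
// Turn raw SVG markup into a CSS background-image declaration.
// Illustrative sketch only - not SVGX's actual implementation.
function svgToCssBackground(svgMarkup) {
  const encoded = svgMarkup
    .replace(/"/g, "'")   // double quotes would end the url("...") early
    .replace(/%/g, '%25') // escape % first, before introducing new ones
    .replace(/#/g, '%23') // # starts a fragment in a URI
    .replace(/{/g, '%7B')
    .replace(/}/g, '%7D')
    .replace(/</g, '%3C')
    .replace(/>/g, '%3E');
  return `background-image: url("data:image/svg+xml,${encoded}");`;
}

// Paste the result into a CSS rule to get a repeating SVG background
console.log(svgToCssBackground('<svg xmlns="http://www.w3.org/2000/svg"><circle r="4"/></svg>'));
```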
An overarching feature is the live preview, and I recently expanded on that by allowing the markup to be edited and saved. Editing it will update the preview in real time, so you can quickly make some changes to an SVG file and see what the result is right away.
SVGX is almost ready to be released. I hoped to do it before the end of the year, but there are various tedious pre-launch tasks that need to be done, and this will take a while longer. I am releasing it under a new software distribution model that I coined, called gitware. Basically, you pay $0 or more, depending on how useful you find it, but you can also buy access to the source code. The app is built in Electron + Svelte.
An expense tracker that I abandoned to focus on other projects. This year I let the domain lapse. Realistically I won't be returning to this one for a while.
I built a local mountain bike trail conditions notifier, as well as a personalized interface for Strava cycling data. Both of these projects are cycling-related, but they are mostly for my own use, and I don't really plan on making them available to the public.
One repo that gained some traction as well as a little Twitter buzz is my Svelte + Tailwind 2 starter template.
I also built this 1-page site to celebrate TailwindCSS' 2.0 new color palette. The site is built in Svelte, and comes with an associated repo.
In 2020 the Laravel ecosystem has continued to improve. I've been integrating Livewire and Alpine.js a lot more in my apps, but also writing less "pure" Laravel.
The biggest change has been my focus on Svelte and Electron. I've built Electron apps in the past, but I leveled up my knowledge this time around. I enjoy the idea of making offline desktop apps that don't require sign-ups or an account.
Weight lifting was my bread-and-butter for many years, but the pandemic finally put a stop to it. I closed my gym membership, and dug up my old weight set to continue lifting at home. Unfortunately, it wasn't the same. I tried to do it regularly, but an old shoulder injury flared up and prevented me from sticking to it consistently.
On the other hand, I picked up cycling this year, with a vengeance. I found a new passion for road cycling, but also improved my trail-riding skills. Between road and mountain, I rode 4000 miles (~6400 km) in 2020, starting in June. That's literally 10x more than 2019.
It helped that I made the decision to invest in a smart indoor trainer, which allowed me to continue riding once winter arrived in the Northern hemisphere. I've been Zwifting gleefully ever since. It has motivated me to ride even more than I was outdoors, for the simple reason that I can just hop on the bike anytime, and it's infinitely safer than outside on the streets.
The only downside to all this cycling is that I lost a lot of weight in the form of muscle mass, leading to the stereotypical "cyclist" physique. I also dropped down 2 shirt sizes, which doesn't make me very happy considering that most of my clothes don't fit anymore.
In 2020, I read about 14 regular books, though few of them were particularly inspiring. Among those, Seveneves by Neal Stephenson and Cibola Burn (book 4 of the Expanse series) stand out.
I also (re)read 5 comic book series. Watchmen by Alan Moore, Bone by Jeff Smith, and Transmetropolitan by Warren Ellis are some of my all-time favorites. I reread them every few years.
I don't watch TV in the traditional sense, but I do watch a fair amount of movies and TV shows.
In 2020, some of the movies that stood out were Ford v Ferrari (2019), Terminator: Dark Fate (2019), Zombieland: Double Tap (2019), Jojo Rabbit (2019), 1917 (2019), The Lighthouse (2019), But I'm a Cheerleader (1999), Klaus (2019), and Just Mercy (2019). Notice there aren't any 2020 movies on the list, and that's because I don't feel anything that came out this year was particularly good.
I also watched individual seasons from various TV shows. Excellent ones include: Beforeigners S01 (2019), Catch 22 (2019), Star Trek Picard S01 (2020), and The Mandalorian S01 (2019). I have yet to watch season 2 of The Mandalorian.
A PC gamer through and through, I've been playing less and less over the years. In 2020, however, I discovered a new gem, in the form of Hades. What a masterpiece! If you like action RPGs, amazing art, and superb sound and voice design, this game is a must. No wonder it has won so many awards.
Apart from that, I've been dabbling in various other games in my Steam/GoG/Epic libraries.
Twitter remains my primary social media outlet. Building a following is hard work, especially when you don't set out to do it intentionally (via heavy self-promoting, etc).
I am happy, and grateful, to have gained 130 new followers in 2020, for a total of 170 at the end of the year. While it doesn't sound like much, it's a huge relative increase, and more than I hoped for. To those who follow me, thank you! To those who don't, please follow me 😿
I operate 2 additional Twitter accounts:
I am also present on:
- Product Hunt - 38 followers
- Indie Hackers - 8 followers
- SVGX - 7 followers
Well, this didn't age well: "I don't expect things to change a lot in 2020" (me, end of 2019).
On the dev tech side, I'm looking forward to new versions of Laravel, Livewire/Alpine, Inertia.js, as well as more mature versions of Vue 3, and Svelte.
I have my sights set on a couple new languages/frameworks that I'd like to try, given time. I'll talk about them when the time comes, no need to jinx it just yet.
I fervently hope that in 2021 I'll make my first dollar from selling software I created. This statement might seem strange, but it's true - I've never made a single cent off of any software product. It bears some explaining, but that's a topic for another time.
I definitely exceeded my cycling goals for 2020, and I'm hoping for a similar trend in 2021. At the same time, I need to be careful to avoid injury, as both mountain biking and road cycling can be very dangerous.
Finally, health has been of the utmost importance in 2020, and it must continue to be so in 2021 and beyond.
If you've made it this far, thank you - you are awesome! I'll skip predictions for the next year, but I'll end on this lukewarm note: may 2021 be an improvement over 2020, for everyone.
This is not about the merits of SVGX. No, it's about the impostor syndrome that haunts many a developer, even after the product is done. I feel a little dirty and bothered by the idea of charging for something I made in my spare time, for my own use, that many would expect to be released as open source.
At the same time, I put lots of passion and many hours into building it. Surely it's worth something, right? Well, it might be worth a lot to a few people, and nothing to lots of people, but there's a good chance I'll never run into those few who find it valuable enough that they would pay for it.
Then there's also the psychological aspect of earning money from something I created. Knowing there are people who value my work to this extent, does a lot for my self-esteem.
The danger with any kind of software that you build for yourself, and then want to sell, is the possibility that you're the only one with that problem. Or, granted, part of a tiny minority. Which means the market for your thing might as well not exist. So why even bother charging for it?
Another school of thought says you should charge a lot more than you were going to initially. I tend to agree with some of the reasons, but this strategy works best when you already have a large audience, or when your product is already awaited with anticipation. None of this applies to me.
My audience, though growing, is still tiny, and mostly restricted to Twitter. So how do I grow an audience? I know some theory, but frankly I'm not a marketer, and self-promotion is somewhat distasteful to me. I dislike heavy marketing tactics, so I'm growing my "brand" the only way I know how: article by article, tweet by tweet.
From previous experience, the highest quality audience is built organically, albeit slowly. That's why I try to post good quality content, and refrain from noise and non-developer related stuff. If you're following my work, you're probably a developer like me, facing similar problems, and searching for similar solutions.
All this to say that I've decided to offer SVGX for the price of pay what you want, starting at $0. You can try it for free, for as long as you want, but if you find it useful, I wouldn't mind a "buy me a beer" kind of tip.
At the same time, I'm selling access to the SVGX source code on GitHub for a reasonable amount ($20 at the time of writing). For those interested in how a fairly complex Electron + Svelte app is built, I'm sure this will provide good value.
In summary:
I call this new software distribution model Gitware. Not the most inspiring term, but that's what I could come up with on short notice.
While Gitware may seem antithetical to the idea of open source, that's because it's not open source. It's not traditional closed source commercial software either. If you can imagine the Mercedes-Benz logo, with open source and closed source forming two of the points, then Gitware would be the third point.
Intermission
See the Mercedes logo above? It's plain SVG code that I pasted into the markdown of this article.
Incidentally, this is one of the things SVGX does really well. It can search across all your offline SVG icon libraries and quickly find what you're looking for. For this example, I searched for "mercedes", not even knowing if I had an icon for it. Sure enough, there's 1 result coming from a free library called Simpleicons, which I've downloaded to my local icon archive.
Now back to our regular programming
Gitware is my way of creating exposure and shining a light on my work. It may very well prove to be a flop. In fact, I suspect it's a very naive (to say the least) way to sell software. And that's ok, because building an audience is a lot more important for me at this stage than a few bucks.
Generating goodwill requires a lot of consistent work creating great software products, some of it free or open source, some paid. But it's a long, winding road to the point where people are clamoring to give you their money, and trust is earned the hard way. Thankfully, I can gladly give SVGX - the app - away for free. I want it to be the first of many more to come, as my repository of ideas is perpetually growing.
BankAlt was a website hosted at bankalt.com, and its primary raison d'être was to facilitate the crafting process in World of Warcraft. 10+ years ago I used to be very involved in the game, and one of my trademarks was making lots of gold from gathering, crafting, and trading.
BankAlt is beautifully (but not entirely, for reasons I'll mention below) preserved at the Web Archive (aka Wayback Machine).
I have always been a PHP dev, so it stands to reason that the LAMP stack would power my first big side project.
First, a quick word on BankAlt's state of preservation at the Wayback Machine. The static pages are all there, but the dynamic parts are not. The back-end powering the dynamic data tables is now defunct.
The initial purpose of BankAlt was - as it often is - to scratch an itch and make my life in WoW easier. WoW had, and most likely still has, a complex crafting tree. To craft a high-end item you would have to craft a lot of intermediary items, parts, and materials. But in order to juggle trading, the auction house, crafting, and gathering, and make lots of profit in the process, I needed a tool that could automatically calculate the raw materials for any recipe or pattern, as well as the cost of those materials.
I could then use this information to find the best source of raw materials, and then sell the finished product for a hefty profit.
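In JavaScript terms (BankAlt itself was PHP, and the item names, quantities, and prices below are invented purely for illustration), the core of such a calculator is a recursive expansion of a recipe into its raw materials:

```javascript
// Hypothetical crafting data - null marks a raw material that is
// gathered or bought directly rather than crafted.
const recipes = {
  'Iron Bar': null,
  'Coal': null,
  'Steel Bar': { 'Iron Bar': 1, 'Coal': 2 },
  'Steel Sword': { 'Steel Bar': 4 },
};

// Hypothetical market prices for the raw materials
const prices = { 'Iron Bar': 3, 'Coal': 1 };

// Recursively expand an item into a tally of raw materials
function rawMaterials(item, qty = 1, totals = {}) {
  const recipe = recipes[item];
  if (!recipe) {
    totals[item] = (totals[item] || 0) + qty; // raw material: count it
    return totals;
  }
  for (const [part, count] of Object.entries(recipe)) {
    rawMaterials(part, count * qty, totals); // expand intermediary items
  }
  return totals;
}

// Price the tally, so the finished product can be compared against it
function totalCost(totals) {
  return Object.entries(totals)
    .reduce((sum, [item, qty]) => sum + prices[item] * qty, 0);
}

const mats = rawMaterials('Steel Sword'); // { 'Iron Bar': 4, 'Coal': 8 }
console.log(mats, totalCost(mats));       // cost: 4*3 + 8*1 = 20
```

With the raw-material cost in hand, comparing it against the auction-house price of the finished item tells you whether crafting is profitable.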
I can't pinpoint what made me build an actual site for this tool, but it must have been a love for World of Warcraft, combined with the desire to create, coupled with the excitement for solving a problem. By that point I had been a web developer for several years, but I had never attempted something of this magnitude.
Making money with BankAlt was definitely not the driving factor, nor was it a big priority later. Back at the time, banner ads were popular, and I thought I could fund the site with ads. That might have worked, had I spent time marketing it and spreading the word. Marketing has always been distasteful to me, so I kept sweeping it under the rug until the very end.
To me, ads are not the most ethical way to monetize a site, but if the product is useful and popular, and enough people are using it, ads can be worthwhile. A very good example is Photopea, which generates impressive revenue for its sole creator.
If I were to do it again today, I would probably use a mix of ads and an ad-free premium subscription (with extra perks, of course). I would hold off on ads, though, until the traffic and usage were high enough. Putting ads on an emerging service only helps to drive away potential users.
The site ran for approximately 3 years, 2010 - 2013. I'm not even sure if I terminated it in 2013 or 2014. As my interest in WoW waned, I decided to shut it down, since I was barely covering my hosting costs with the small amount of advertising revenue that was coming in.
BankAlt made very little money, certainly not enough to offset the amount of work I put into it. I don't have any regrets - in fact I'm glad to have stuck with the project for so long. But to me, money is not the driving factor behind a side project, and maybe that's failure on my part. On the other hand, a revenue-generating project could ensure its long term viability, as well as providing motivation to keep hacking on it.
The true worth of this project comes from the fact that it allowed me to sharpen skills on a fun endeavor outside of work. A project like this boosts your confidence as a developer to an immeasurable degree. As BankAlt traffic and active users grew, I would get lots of warm fuzzies inside. This feeling persists even today to an extent.
Looking back, I am very proud to have built BankAlt. Frankly, I am impressed at past me for crafting it to such a high level of care and detail.
Sometimes I regret shutting it down, and I can't help wondering what might have been if I had continued to maintain the project. Realistically, my motivation collapsed soon after I stopped playing WoW, and I don't think I could have forced myself to run a tool I had very little interest in.
While browsing the Web Archive under the influence of nostalgia, a wild thought occurred. Wouldn't it be cool if I could resurrect the site for fun? It would be virtually useless to myself or current World of Warcraft players, but I would be proud to feature it under my portfolio.
The bankalt.com domain has long expired, and I do not plan on buying it back. Out of curiosity, I checked to see if it's available, and I was a little dismayed to find out that it is currently listed for $2900.
Consequently, buying the original domain is out of the question. If I do manage to restore the site to a working state, I will host it on one of my other domains. But if you, dear reader, have money burning a hole in your pocket, feel free to buy it back for me 🤑.
So, resurrection. How feasible is it? Until I dig deeper, I would say chances are pretty good. When I shuttered the site, I had the foresight to make multiple backups of the source code and the database.
All I have to do is to restore the database, put the source code in a folder on one of my VPS instances, and point Nginx to index.php
. The old site ran on an Apache server, but that shouldn't matter. Hopefully backward compatibility will handle most of the issues.
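The Nginx side of that plan might look something like the sketch below. The domain, web root, and PHP-FPM socket path are placeholders, not an actual tested setup:

```nginx
server {
    listen 80;
    # Placeholder domain - bankalt.com itself is long gone
    server_name bankalt.example.com;

    root /var/www/bankalt;
    index index.php;

    location / {
        # Fall back to the legacy index.php for unmatched URLs
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # Hand PHP files to PHP-FPM; the socket path varies per distro
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php/php-fpm.sock;
    }
}
```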
But wait, there's more! I had another wild idea. What if, in addition to resurrecting the original site, I were to rebuild it separately as a modern Laravel app? I think that would be pretty awesome too, just for the learning experience of porting such an old codebase.
This mini-project entails 2 phases:
Both phases are great candidates for how-to articles: "How to revive a legacy PHP application", and "How to rebuild a legacy PHP application in Laravel". Actual titles TBD.
Before I start any of this, I need to work through my side-project backlog and clear a couple of higher priority tasks. Expect phase 1 to commence in the first half of 2021, so stay tuned!
While the Wayback Machine keeps archives of all the pages, I wanted to capture those as screenshots and bring them closer to my heart, until - and if - I am able to resurrect BankAlt.
I'm using Strava's API to pull my rides into a Laravel 8 app. I use this data to create statistics for various metrics that are important to me.
I want to display the data in 2 forms: tabular and charts. I also want to be able to filter it dynamically, without page loads, and I chose Livewire 2.x for this. As an avid cyclist, I ride both road and trail/singletrack. I have 2 bikes (road + mountain), and I want to track total stats, as well as individual stats for each bike.
For charting, I decided to use ApexCharts, a JavaScript charting library that meets my modest needs.
Now that I have these building blocks, how do I put them together? One of my requirements is to be able to update the charts dynamically with Livewire. It took me a while to figure out how to do this, but here it is.
Note During the time it took me to write this article, Andrés Santibáñez released a Livewire package for ApexCharts that should be more flexible for the general population. I think there is value in my own solution, not just for the learning aspect, but also because it's customized to my very specific requirements.
The first chart I wanted is a total of riding miles per year (as tracked by Strava). In addition, I also wanted to be able to filter the chart by bike, using a dropdown. So when I select a specific bike, the chart should update to display only the yearly distances for that bike.
Here's a gif of the final chart.
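Stripping away Livewire and ApexCharts, the aggregation feeding this chart boils down to something like the following sketch (the ride records and field names are made up for illustration, not Strava's actual data shape):

```javascript
// Made-up ride records standing in for data pulled via Strava's API
const rides = [
  { year: 2019, bike: 'road', miles: 120 },
  { year: 2020, bike: 'road', miles: 2500 },
  { year: 2020, bike: 'mtb', miles: 1500 },
];

// Total miles per year, optionally restricted to a single bike
function distanceByYear(rides, bike = null) {
  return rides
    .filter((r) => bike === null || r.bike === bike)
    .reduce((totals, r) => {
      totals[r.year] = (totals[r.year] || 0) + r.miles;
      return totals;
    }, {});
}

console.log(distanceByYear(rides));         // all bikes combined
console.log(distanceByYear(rides, 'road')); // only the road bike
```

When the bike dropdown changes, re-running this aggregation with the selected bike produces the new series that gets pushed to the chart.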
A Laravel app can be structured in many ways, but I ♥️ the Blade component system that was introduced in v7, so that's what I'm using. First, I started out by creating a regular view (resources/views/stats.blade.php
) where I can display my chart(s).
Then there are 2 components that do the heavy lifting:
Each generated component comes with a controller and a view. Here's a gist for the 2 controllers + 2 views, also embedded below:
Note File paths in the gist appear as app%Http%Livewire%Stats%DistanceByYear.php
instead of app/Http/Livewire/Stats/DistanceByYear.php
due to GitHub's inability to use slashes in the file name.
👉 The stats.blade.php
view is where I render multiple Livewire chart components. This also contains a bit of code which links the ApexCharts script from the official CDN and pushes it to the top of my JS scripts stack. For context, in my app.blade.php
I have a corresponding @stack('scripts')
right before the closing </body>
tag.
👉 The chart wrapper ApexCharts.php
must have a unique id $chartId
, to allow multiple chart instances on the same page. I experimented with passing a UUID but settled on a static identifier like "distance-by-year".
👉 To refresh the chart data when a filter is applied, I need to emit an event. Notice this part $this->emit("refreshChartData-{$this->chartId}", [...])
in DistanceByYear.php
. The event has a dynamic identifier which ensures that only a specific chart gets updated (in situations where multiple charts are on the same page). In this case, the event id resolves to refreshChartData-distance-by-year
. But on the same page I have another chart which is identified as distance-by-month
, and the corresponding event is refreshChartData-distance-by-month
. The second argument of the event emitter is the (optional) data payload. If you've used events in Vue, this pattern should look familiar.
👉 Emitting an event is only half the equation. To actually get the chart to update, I need to listen for the event, then call a couple of ApexCharts methods responsible for updating the chart data.
👉 Listening and reacting to a Livewire event turned out to be the hardest part to figure out. It's just not very clearly explained in the official documentation, or at least not in a way that makes sense to me. So after much experimentation and web searching, I arrived at the following ugly-duckling-yet-functional solution (see apex-charts.blade.php):
document.addEventListener('livewire:load', () => {
@this.on('refreshChartData-{!! $chartId !!}', (chartData) => {
chart.updateOptions({
xaxis: {
categories: chartData.categories
}
});
chart.updateSeries([{
data: chartData.seriesData,
name: chartData.seriesName,
}]);
});
});
👉 The key to listening in JavaScript for an event emitted from PHP/Livewire seems to be wrapping everything in this:
document.addEventListener('livewire:load', () => {
@this.on('refreshChartData-{!! $chartId !!}', (chartData) => {
// do JavaScripty stuff with chartData
});
});
👉 Notice that I'm wrapping the entire JavaScript logic in an auto-executing function call (function () {...}())
. Tangentially, here's a good explainer for auto-executing functions in JavaScript. The reason I'm doing this is to isolate the scope of the chart
object to each individual instance. This allows me to refresh the chart data without re-instantiating the ApexCharts object, and prevents weird behavior with multiple globally defined chart
objects.
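To illustrate the scoping problem with a stripped-down sketch (makeChart stands in for new ApexCharts(...); there's no real charting here):

```javascript
// Each chart component's inline <script> ends up in the same page scope,
// so a shared `chart` variable would be overwritten by the second chart.
// Wrapping each block in an IIFE gives every chart its own binding.
const instances = {};

// Stand-in for `new ApexCharts(element, options)`
function makeChart(chartId) {
  const chart = { id: chartId, series: [] };
  instances[chartId] = chart;
  return chart;
}

// What the first component's script effectively does
(function () {
  const chart = makeChart('distance-by-year'); // scoped to this IIFE
  chart.series.push([100, 200, 300]);
})();

// The second component declares its own `chart` without a collision
(function () {
  const chart = makeChart('distance-by-month');
  chart.series.push([10, 20, 30]);
})();
```

Each event listener then closes over its own `chart`, so refreshing one chart's data never touches the other.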
I hope this shed some light on how you might create a Livewire wrapper for the ApexCharts library.
There are several caveats to my approach:
Having said that, I'm happy with the way this turned out, especially with the learning process figuring out the intricacies of integrating Livewire and ApexCharts.
Finally, here's how two independently filtering charts behave on the same page.
The application icon may seem like a minor detail, yet I consider it very important, not just for branding, but also as a sign that the app is complete.
Unfortunately there don't seem to be a lot of resources out there for how to actually create proper Mac and Windows (and Linux) icons for the final build. It took me a while to figure out, but eventually I got it.
SVGX is an Electron app built with Svelte, as well as Forge which is a helpful tool for creating and publishing such apps. I'm also using this template as a starter.
First, install the electron-icon-maker utility, which generates the icons for you. Follow the instructions in the repo.
Next you'll run the command to generate a set of Mac/Windows/Linux icons from a single png
image. The source image should be at least 1024x1024 in size.
In my case, I ran this in the folder where my source image svgx-logo-v3-1024.png
is located, and outputted it to another folder called appicons
.
electron-icon-maker --input=svgx-logo-v3-1024.png --output=./appicons
Back in the Electron app directory, add the appropriate icon path to package.json, before running the build command:

- Mac: ./src/icons/mac/icon.icns
- Windows: ./src/icons/win/icon.ico
- Linux: ./src/icons/png/1024x1024.png
{
"name": "...",
"productName": "...",
"version": "...",
"description": "...",
"main": "...",
"scripts": {
...
},
"keywords": [],
"author": "...",
"license": "MIT",
"config": {
"forge": {
"packagerConfig": {
"icon": "./src/icons/mac/icon.icns"
},
"makers": [
...
]
}
},
"dependencies": {
...
},
"devDependencies": {
...
}
}
I haven't figured out if there's a way to do this across platforms without modifying package.json
manually before building, but this works well enough and barely adds any overhead.
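One possible workaround, if I'm reading Electron Packager's documented behavior correctly (treat this as an assumption to verify, not a tested solution): the icon path can be specified without an extension, and the correct .icns or .ico is then inferred per platform at package time.

```json
{
  "config": {
    "forge": {
      "packagerConfig": {
        "icon": "./src/icons/icon"
      }
    }
  }
}
```

This assumes icon.icns and icon.ico sit side by side in ./src/icons/.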
Run the command to generate the appropriate build for your OS. For Electron Forge, the command is npm run make
or yarn make
.
Profit!
It uses Electron and Svelte, as well as Forge, which is a helpful tool for creating and publishing such apps. I'm also using this template as a starter.
My plan was to offer two versions of the app: paid and demo. Notice I said "was" - I'm still debating the details. Anyway, I thought the demo would be a stripped edition of the full app, lacking certain features.
I decided that one way to accomplish this in an Electron app would be to create a couple of extra build tasks in package.json
, and then run the commands like so:
- npm run start or yarn start - builds the full Mac version
- npm run start-demo or yarn start-demo - builds the demo Mac version
- npm run start-win or yarn start-win - builds the full Windows version
- npm run start-win-demo or yarn start-win-demo - builds the demo Windows version

Each of these tasks would export a DEMO
flag as an environment variable that my app could use to conditionally "guard" features when the flag is false
.
Well, on Mac it's simple: just add export \"DEMO=yes\"
in the script (notice the escaped quotes), and call it a day. The Electron app would read the DEMO
variable with process.env.DEMO
. Simple, right?
Not so fast. It turns out you can't use this syntax to export environment variables in Windows (I use Git Bash for my terminal). The build process will fail with an error:
'export' is not recognized as an internal or external command, operable program or batch file.
I feel I should have known this, but I code almost exclusively on a Mac, so I never ran into this situation before. What does work is to use set \"DEMO=yes\" instead.
So my script becomes what you see below:
{
  "name": "...",
  "productName": "...",
  "version": "...",
  "description": "...",
  "main": "...",
  "scripts": {
    "start": "export \"DEMO=no\" && concurrently \"npm:svelte-dev\" \"electron-forge start\"",
    "start-demo": "export \"DEMO=yes\" && concurrently \"npm:svelte-dev\" \"electron-forge start\"",
    "start-win": "set \"DEMO=no\" && concurrently \"npm:svelte-dev\" \"electron-forge start\"",
    "start-win-demo": "set \"DEMO=yes\" && concurrently \"npm:svelte-dev\" \"electron-forge start\""
  },
  ...
}
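For what it's worth, the cross-env npm package is a popular way to normalize this difference, letting a single script work on both platforms. I haven't used it in this setup, so treat this as an untested alternative sketch:

```json
"scripts": {
  "start": "cross-env DEMO=no concurrently \"npm:svelte-dev\" \"electron-forge start\"",
  "start-demo": "cross-env DEMO=yes concurrently \"npm:svelte-dev\" \"electron-forge start\""
}
```

The trade-off is an extra devDependency in exchange for half as many scripts to maintain.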
In summary:
- export on Mac
- set on Windows

In 2019, Apple redeemed itself in the eyes of many developers by releasing a new, much improved 16" model. This new laptop flagship represents a coy admission of past failure and an attempt at redemption. Luckily, I own both the 2015 15" and the 2019 16", and I use them on an almost-daily basis, so I am in a position to compare them from the point of view of a developer.
To cut right to the chase, my personal, biased opinion is that the 2015 15" MacBook Pro is overall a better machine than the 2019 16", for a developer, when cost is an issue. The things that I dislike the most (there are more, keep reading) about the 16" are: lack of ports, weight, Touch Bar.
After a brief spec comparison, I will dive into various features and components and square them off against each other. Keep in mind that this is a biased, subjective review, and your opinion may be the opposite. Let's proceed.
Quick disclosure: the 2015 MacBook is my personal machine that I've been using for ~4 years, while the 2019 16" was issued by my employer, and I've been using it for ~5 months.
| | 2015 (mid) | 2019 |
|---|---|---|
| Model | 15" MacBook Pro | 16" MacBook Pro |
| CPU | 4-core i7 (2.5 GHz / 3.7 GHz Turbo) | 8-core i9 (2.4 GHz / 5.0 GHz Turbo) |
| RAM | 16GB 1600MHz DDR3 | 32GB 2667MHz DDR4 |
| SSD | 512GB | 1TB |
| GPU | AMD Radeon R9 M370X 2 GB | AMD Radeon Pro 5300M 4 GB |
| Display | 2880 x 1800 (220 ppi) | 3072 x 1920 (226 ppi) |
You'll notice that the new machine is approximately twice as "better" (2x!!!) on paper, in most categories. But is it really that much more powerful? Debatable, and highly dependent on your use case.
Winner: 16" 2019
2015 15" 3/5 ⭐⭐⭐️️️️️️️️️
2019 16" 4/5 ⭐⭐⭐⭐
Let's tackle the elephant in the room first. After 2015, the keyboard on subsequent generations has been widely reviled as one of the worst regressions to ever afflict a Mac. Apple changed the classic design from scissor to butterfly switches (mostly for aesthetics and a misguided desire to make everything slimmer), but in doing so it introduced two major problems: a lot of people were put off by the new tactile feel, and, more importantly, the new keyboards tended to break in record numbers (due to gunk getting stuck inside the much narrower confines).
Personally I know quite a few people with post-2015 MacBooks, and all but one have had various keys break, stop working, or had to be replaced entirely.
Keyboard feel is a... touchy subject (forgive the pun) and, having never used the butterfly generation, I can't really comment on it.
I am happy to report that the 2019 keyboard (which has reverted to the scissor design) is quite pleasurable to type on. I would put it above the 2015, for several reasons:
Yet, it's still not quite up there with the Apple Magic Keyboard 2, which I consider to be the best keyboard ever for coding/development.
My personal keyboard hierarchy is: Magic Keyboard 2 > 2019 16" MacBook Pro > 2015 15"/13" MacBook Pro. Keep in mind that the differences are very small, and I will just as happily use the 2015 keyboard vs the MK2.
It seems that Apple has finally solved the keyboard-go-bad issue with this generation, despite never acknowledging the flaws in the butterfly design.
Winner: 15" 2015
2015 15" 5/5 ⭐⭐⭐⭐⭐️️️️️️️️️
2019 16" 3/5 ⭐⭐⭐
The post-2015 touchpad has grown to a ludicrous size. It can be argued that the bigger surface area helps artists and designers (does it though?) but as a programmer I find it annoyingly and uselessly large, and it occasionally gets in the way as I type.
There was nothing wrong with the size of the 2015 touchpad - they could've added a couple mm on each side and called it a day.
In operation, I can't find any difference between the 2015 and the 2019 touchpads but I'll take points off of the 2019 just because I'm put off by the size.
Winner: 15" 2015
2015 15" 5/5 ⭐⭐⭐⭐⭐️️️️️️️️️
2019 16" 1/5 ⭐
To its credit, the 2019 16" MacBook Pro does have 4 USB-C ports (count them - 4!!!) but this pales in comparison to the old 2015's plethora of connection options: USB 3? Check! HDMI? Check! Thunderbolt? Check! SD card slot? Check!
I can't convey enough how annoyed I am that I have to use a stupid dongle to connect an external monitor. I'll admit that perhaps in a few more years there will be a lot more USB-C peripherals but for the time being I don't own a single USB-C device that can connect directly to my Mac.
Apple could have easily added an HDMI port and a couple USB 3 ports because the 16" is chunky enough to accommodate them. But hey, form over function 😎.
Winner: 15" 2015
2015 15" 5/5 ⭐⭐⭐⭐⭐️️️️️️️️️
2019 16" 1/5 ⭐
Though the MagSafe connector is technically a... connector, I wanted to discuss it in its own category because I'm deeply offended that they removed it. MagSafe is one of those brilliant-yet-simple innovations that brought so much utility and elegance to the Mac.
MagSafe was, naturally, replaced by one of the 4 generic USB-C ports, but if you think you can use any of them to charge your Mac, you'll be in for a treat. Thanks to another design flaw, it turns out that you should probably plug in your charging cable only on the right side, otherwise your computer may overheat.
Winner: 15" 2015
2015 15" 5/5 ⭐⭐⭐⭐⭐️️️️️️️️️
2019 16" 2/5 ⭐⭐
I really, really (really) wish the Touch Bar was optional. Subjectively, I find it not just useless, but actively annoying, as a programmer.
I tried to love it, I did. For a couple of months I gave it the benefit of the doubt and tried to integrate it into my workflow (whatever that means), but I found myself reaching out for the F-keys a lot more often than I needed the "context-aware" functions of the Touch Bar. In the end I flipped it in Preferences so that it acts as F-keys by default.
So now I'm left with a smooth, non-tactile strip of function keys that are worse in functionality, overall, than a regular keyboard, AND I have to pay extra for the privilege (well, not in this case because it's a work machine, but you get the point). Thanks, Apple!
Winner: 16" 2019 - barely
2015 15" 4/5 ⭐⭐⭐⭐️️️️️️️
2019 16" 5/5 ⭐⭐⭐⭐⭐
The 2019 model adds a tiny amount of real estate (16" vs 15.4") and more pixels, but the resulting PPI is very close (226 vs 220). The differentiators, however, are the higher brightness (500 nits vs 300), higher contrast (unspecified vs 900:1), and wider color gamut (P3 + True Tone vs sRGB).
In daily use, the 16" display does pop, but the difference is not as big as the specs might suggest. Now, if you're any kind of artist (and not merely a programmer), your bias will be stronger in favor of the display, and I can't fault you for that. For me, the 15" screen is good enough, and switching to the 16" is just the cherry on the cake.
Winner: 16" 2019 - barely
2015 15" 3/5 ⭐⭐⭐️️️️️️️
2019 16" 5/5 ⭐⭐⭐⭐⭐
On paper, the 2019 with the i9 CPU blows the i7 2015 out of the water. In real life usage, however, I can't tell the difference.
My work consists of web programming in PHP/Laravel/JS, with a couple VMs running in the background, and an occasional Docker container. My main IDE is PHPStorm, but I usually run VSCode in parallel. Firefox and Chrome are both running with a few dozen tabs open. I've also got various other apps running in the background, such as Slack, Spotify, etc. Rarely, I fire up Pixelmator Pro, Figma, or Gravit. Both machines work impeccably but neither feels faster than the other.
If you're just a programmer like me, and not involved in heavy video/image processing, it's very likely that the i9 CPU is massive overkill for 99% of cases. I will, however, give the 2019 a slight edge, just because it is objectively faster.
Finally, if I had to pay for the 16" out of my own pocket, I would get the i7 with 32GB of RAM, since the extra memory will make a bigger difference in my line of work than the additional cores.
Winner: 15" 2015
2015 15" 3/5 ⭐⭐⭐️️️️️️️
2019 16" 2/5 ⭐⭐
It's probably the i9 CPU's fault, but my 16" likes to run hot, despite my not putting a lot of strain on it, aside from VirtualBox and the VMs I mentioned earlier. As a result, power tends to drain quickly when unplugged, even under light load. This is likely because many apps aren't optimized for multi-core usage and end up overloading a single core instead.
Yet another reason to skip the i9 if you don't need the extra cores.
Winner: 16" 2019
2015 15" 2/5 ⭐⭐️️️️️️️
2019 16" 5/5 ⭐⭐⭐⭐⭐
Finally, a category where the 2019 MacBook Pro shines! The built-in speakers are the best I've ever used in any kind of portable machine. To quote Apple, "High‑fidelity six‑speaker system with force‑cancelling woofers".
I'm not in the least an audiophile but even I can appreciate the quality of these speakers. The sound is clear and crisp without distortion, as well as loud to the extent that I can't crank it up more than 50%.
In contrast, the speakers on the 2015 generation are plain ol' stereo, and rather muffled sounding, though still serviceable.
As such, the 2019 16" MacBook Pro is the clear winner in the audio category.
Winner: 16" 2019
2015 15" 1/5 ⭐️️️️️️️
2019 16" 5/5 ⭐⭐⭐⭐⭐
I love Touch ID. It makes authentication so much easier. 1Password integrates very well with Touch ID, too. You still need to use your master password occasionally to unlock the machine, for example after waking it from a long sleep, but that's actually a good security feature.
Winner: 16" 2019 - barely
2015 15" 2/5 ⭐⭐️️️️️️️
2019 16" 3/5 ⭐⭐⭐
I'll spare you the raw camera specs, but the 2019 does have a slightly improved webcam. Disappointingly, it could have been so much better for the price. As it stands, it barely edges out the 2015 model.
Winner: 15" 2015
2015 15" 3/5 ⭐⭐⭐️️️️️️️
2019 16" 2/5 ⭐⭐
The 16" machine is big and heavy, despite lacking actual "Pro" features such as the various ports a power user might need. While sporting only a few mm more in length and width than the 2015 15", it feels like a dumbbell, making me afraid of holding it by the edge, in case it bends.
In truth, the build is very solid and there's nary a wiggle, creak, or flex. Yet, the 2015 is slimmer, lighter, and easier to hold in your lap.
As a bonus, here are a few more observations on the 2019 16" MacBook Pro.
Random crashes
Sometimes the machine crashes randomly for no apparent reason. The temperature rises, which causes the fan to spin fast. Meanwhile, both the image and controls freeze, and the only way to restore it is to power it off and on again. One time it froze during a video conference, but the sound was still going, so I could hear my team talking but I couldn't interact.
Luckily this doesn't happen very often, but it's unpredictable, and it doesn't look like macOS 10.15.6 (the latest version as of this writing) fixed the problem.
Lifting the lid
It may sound like such a small detail, but I don't feel as confident lifting the lid of the 16" one-handed as I do with the 2015 15". There's a little too much "stickiness" (read tension) in the lid mechanism. I know, I know, it's nitpicking, but still...
Hinge smoothness
The 2015 15" has more of a linear feel to the lid mechanism, in that the tension feels constant throughout its travel. Not a good or bad thing, just different.
Power button crispness
I rather like how crisp the power button on the 16" is. You know, the same button that houses the Touch ID. It's very firm and clicky, with zero wobble, which is definitely not the case with the 2015 generation. But then again, we are talking about a one-off, purpose-built button vs a regular function-style key.
Adding up my purely subjective category scores...
2015 15" 41/60
2019 16" 38/60
That makes the 2015 15" MacBook Pro the winner by a small margin. Now, this was a foregone conclusion, a rigged comparison from the very start, considering my inherent biases.
Make no mistake, the 16" is an excellent machine, and a redemption of sorts for the poor hardware Apple put out during the "dark age" between 2015 and 2019. Then again, the 16" is what the immediate successor to the 2015 MacBook Pro should have been. Instead, Apple spent years stumbling around drunkenly, despite masses of developers telling them exactly what they wanted.
If it were up to me - and countless others, based on numerous discussions over the years - I would take the 2015 chassis, then transplant the keyboard, display, speakers, and CPU/RAM/GPU from the 2019 16", and call it a day. I would even keep the same 15" form factor. Oh fine, I'd add a couple USB-C ports.
All this to say that, for a developer, a fully decked-out 2019 16" MacBook Pro is probably overkill. You will likely not need the i9, although more RAM is always nice to have. For video or image processing, or gasp gaming, you'll certainly make use of the more powerful CPU in combo with the best GPU available.
As tested, my 2019 16" is $3300 retail - an eye-watering amount of money. Crazily enough, this is actually middle of the road. If you have unlimited funds, and a need for additional power and storage, you can spec it all the way to $6700.
I am fortunate and grateful that my employer has spent the money to equip our developers with top-notch machines, but not everyone has this opportunity. So if you're considering a MacBook Pro from one of the "best generations" (i.e. 2015 or 2019), and you need to spend your own money, here's what I recommend.
The cheapest solution (and the one I'd probably choose if I had to buy another MacBook Pro) is to buy a used 2015 15" model from eBay or elsewhere. Expect to pay $1000 - $1300 for a well-equipped model (i7, 16 GB RAM, 512 GB / 1 TB SSD). There are signs that Apple will continue to support this hardware for a few more years so this should see you through quite a few projects.
One caveat to watch out for in used Macs is that this particular generation is susceptible to battery swelling. I know 3 different people (me included) who've suffered from this issue. Certain serial numbers are (or were) covered by Apple's battery recall, but others (like mine) aren't. However, if you're not shy about repairing your own stuff, the battery is relatively easy to replace on your own, and it only costs $40-60.
Another thing to watch out for, when buying used from 3rd parties, is the amount of wear and tear on the machine, although that's harder to judge by looking at some pictures. Just do your due diligence in vetting the seller and the product.
There's a good argument for holding off on buying a new Mac for the time being. Later this year, Apple is slated to release a new generation of Macs containing their own CPU design (Arm). By all accounts, this is a good thing, and I'm quite looking forward to it, even though I am not planning to buy a new Mac in the next few years.
Now, if you're absolutely dead-set on buying a new MacBook Pro right now, I think you'll do very well with the entry-level i7, 16 GB RAM, 512 GB SSD model. At $2400, that's anything but cheap. If you need more storage (32 GB RAM, 1 TB SSD), bump that up to $3000. And then you might as well get the i9 CPU and you'll end up at $3300. See how that works?
Luckily, there are frequent sales and deals on the base 16" model ($200-300 off), at Amazon, Bestbuy, Costco, etc. It's possible there'll be heavier discounts when the next generation hits.
Should you upgrade from the 2015 15" to a 2019 16"? If your 2015 Mac still works fine, no. Apart from some nice-to-haves like better speakers and Touch ID, you won't feel the difference. Even if your old machine is failing, I would still recommend buying a used one to hold you off until the next generation of Arm Macs.
That's it for my comparison review of the 2015 15" MacBook Pro vs 2019 16". If you own either of these machines - and they are in working order - pat yourself on the back because they are excellent and will take you a long way in your developer career!
For about 3 years I found myself immersed in a variety of situations and environments that really made me want to code 24/7. Hence this blog and my Twitter handle @brbcoding, both harbingers of my personality. It has been a fun period during which I learned a ton of stuff, and hopefully I was able to spread some of it around.
During this time I was anticipating a moment when I would burn out and take a break from all the coding. This moment has come, and it coincides with the COVID-19 crisis.
While history undoubtedly will record a few extra volumes for 2020, one thing that changed in my professional life was my day job. I had worked as a Laravel dev for the past 2+ years but earlier this year I changed it for something more... sustainable. I am still a full-stack dev - I continue working with PHP and JavaScript - but the stack has no relation to Laravel, Vue, or any of the stuff I've been talking about here and on Twitter.
In other words, work is more intense and not as fun as working with Laravel. At the end of the day I am drained mentally, and it has become harder to make myself code as a hobby.
While the pandemic has transformed certain people into super-makers, cranking out one software product after another, I suspect some of those people were also part of the large wave of layoffs during this period. I would love for my side projects to be my day job, but thankfully I am still employed full time, and none of my side projects have generated any money. Until that changes significantly, the day job is my main priority.
## Social distancing in the great outdoors
I will admit that quarantine and social distancing haven't changed my life a great deal. In fact, strictly on a personal level, it has been mostly improvements.
In the US, things haven't been locked down to the extent of other countries, for better or worse. This means that going outdoors remains an option, as long as distancing procedures are observed.
As summer rolls in, I'm feeling the call of the outdoors. There's something very cathartic about being alone on a trail, with nothing on my mind.
While trail riding is my main passion during the summer, it is highly dependent on trail conditions. To compensate, I've renewed my passion for road cycling, and I've been doing a lot of both.
As I write this, I've exercised for 10 days straight - a mix of trail/road cycling, and weightlifting. I entered a rhythm where I feel bored if I stop even for a day. Ironically - well, not really - I eat a lot more, but I'm getting leaner. So it's a win-win: I get fitter and healthier while eating anything I want in large enough quantities.
In July, I plan to step it up even more. I even signed up for a 600 mile/month cycling challenge, but realistically I don't know if I can pull that off. It's all in good fun though.
I don't plan to take part in any competition, but I'm a firm believer in self-improvement, so training for me is a way to get progressively better at a thing.
One of my favorite ways to unwind after a hard training session is to read a book. The latest is Dune, which I am revisiting after 25 years, in anticipation of the 2020 movie. Reading it as an adult makes me appreciate this timeless classic a lot more.
I think of myself as a gamer but that is only one aspect of what I like. The past month I've indulged in the final installment of Terraria v1.4, Journey's End. Terraria holds a spot in my top 5 games of all time, and the v1.4 content update does it great justice. For the fans, I completed a full run-through and even managed to craft Zenith, the most powerful weapon in the game.
I still code outside of work, but no more than ~30 minutes a day. My desire is to release SVGX to the public sooner rather than later, despite not being jam-packed with all the possible features. The problem, however, is that preparing for a launch is such an overwhelming task that I keep postponing it. Truthfully, I cannot say when the product launch will occur, but if you are interested you can sign up with your email to be notified.
Blogging is very time-consuming, and with my changing priorities and mental state, I've been very inconsistent about posting. As such, I'm shifting more towards a micro-blogging sort of approach, by tweeting developer-related stuff that I find interesting, rather than spending hours crafting a blog post. I rarely tweet non-developer things, so if you like your dev content focused, give me a follow 👉 @brbcoding.
Life ebbs and flows, and so do our interests and priorities. I will go where it takes me. See you around!
New for 2020:
Rolled over from 2019:
Pre-COVID, 95% of my listening happened in the car, while commuting to work.
Post-COVID, since I've been working from home, you would think there would be less time to listen, but no. I get more listening time than ever, by virtue of the fact that now I can listen while exercising, and I do that on a strict day-on/day-off schedule. In addition, I've taken to listening while doing various tasks around the place, like preparing meals and other chores.
Well, ok, I still drive a little. Not driving a car for months is really bad for it, so I "exercise" it once a week, to keep the battery running, the fluids circulating, the brakes from rusting, and prevent the tires from fusing into the ground. And this makes another opportunity for listening.
I can't listen to any form of talk radio while I'm doing actual, creative work, although music is fine. A podcast requires my attention, and it only works for me while I'm performing mindless, physical tasks like driving and exercising.
Another aspect is that, over time, I've been slowly increasing the playback speed in my podcast app. It's a great technique for cramming more time into a limited listening schedule. So far I'm up to 1.4x, but I aim to raise it higher. I need time to get acclimated to each 0.1x increase, so I do it slowly, over a period of months.
In the latter half of 2019 a controversy brewed around Spotify and their podcasting practices. I was - and still am - a subscriber. I use it mainly as my music library, but I was very happy it also had all the podcasts I could ever want. Well, most great products/services/companies will screw up sooner or later, as revenue and hubris grow hand-in-hand, and unfortunately Spotify wasn't about to buck the trend.
When Spotify announced they would be inserting their own ads into podcasts (even for Premium users, and in addition to the ones embedded by the creators), it made subscribers understandably mad. There followed an exodus from the platform, and the race was on to discover new podcast providers.
This proved to be a good thing, for me at least, because I decided to give Pocket Casts a try. And boy, was it a huge improvement over Spotify! For context, I use the Android app on my phone. Not only is the app free (though I would buy it in a heartbeat if it still cost money), but it has an amazing workflow for listening to and managing podcasts. The queueing system and play controls (just to name a couple) are marvelous. Of course, it has all the podcasts I need.
If you haven't tried Pocket Casts yet, I highly recommend it. It is head and shoulders above Spotify's player in terms of features, stability, ease of use and so much more. It may very well make your life easier, and that's no exaggeration.
I am not going to rehash everything I said in the original list I posted last year, so just read the 2019 article for more details.
Twitter N/A
Hosted by Caleb Porzio
Length 10m
What is it about? Caleb shares short, 10 minute thoughts and snippets from his experience building Livewire for the past 1+ year. I love these insightful, quick-fire episodes that don't require a big mental commitment, but at the same time manage to condense essential ideas in an easily-digestible format.
Twitter TwentyPercentTime
Hosted by the folks at Tighten
Length ~20-30m
What is it about? Tighten is a prominent company in the Laravel community, and I've had a soft-spot for them for a long time. This is not a new podcast, but after a hiatus, they are back with new content. New episodes focus on discussions with company employees on various developer-related topics, from code techniques to ops to procedures. Very insightful stuff that sheds light on the inner workings of a successful software consultancy.
Twitter N/A
Hosted by Jason McCreary and Jess Archer
Length ~20-30m
What is it about? Hosted by two prominent members of the Laravel community, the show discusses various programming techniques and challenges, testing and patterns. The technical discussion is slightly more in depth than other podcasts, but it's straight to the point and very easy to follow.
Twitter Ladybug Podcast
Hosted by Kelly Vaughn, Emma Bostian, and Ali Spittel
Length ~40m-1h+
What is it about? A podcast by lady developers with topics ranging from personal development to technical discussions and design, to soft skills and beginner-friendly advice.
Twitter Happy Dev
Hosted by James Brooks
Length ~40m-1h
What is it about? James is a core member of the Laravel team, and his show takes a different tack than other podcasts. Each episode is an interview with a prominent person from the Laravel community, discussing problems related to mental health that are well-known to affect developers.
The cool thing about Cloudinary is that you only need to upload one version of an image (or video), typically in the largest and best quality possible, and it will handle all the resizing and compression for you, on the fly. They also offer a very generous free tier that is perfect for smaller projects.
Recently I launched a one-page static site to promote a new app I'm working on, SVGX.app. The site is hosted on Netlify and so are the images.
Dealing with static images - manually - is pretty annoying to me. For the first iteration of this site I decided to save all images as JPGs to take advantage of the compression. I also made specific sizes to avoid loading full-size images.
The problem, however, is that my best attempts at doing this pale in comparison to what a specialized tool can do. Worse, Chrome's Lighthouse audits complained about the inefficient way I served images. Additionally, it's hard to maintain different sizes and formats of the same image.
So I decided to roll up my sleeves and move those images to Cloudinary in the hope that it would improve the performance of the site. I wasn't wrong.
My Lighthouse scores went up from this:
To this:
To be clear, for the handful of images I'm working with, it made the most sense to upload and link them manually, and I didn't employ any of Cloudinary's automation tools or the API.
When working with Cloudinary, it is a good practice to upload the original images at the best possible resolution, in a lossless format such as PNG. This means that on your local machine you only need to store the master copy in the highest quality available.
Once you've signed up for an account and logged in, you can go to Media Library and hit the big Upload button to select the images you want. You may also organize your images into sub-folders if you wish, though I opted out of that.
In Cloudinary there's a Transformations menu that I wasted time in, but it turns out I didn't need to create any specific transformations beforehand. Those are mostly for automation, which I didn't require since I did all the linking manually.
All I had to do was to get the link for an image (the 🔗 icon when you hover over an image), and then apply the transformation parameters directly to the URL in my code.
The basic URL for the main (and largest) image on SVGX.app is given by Cloudinary as:
https://res.cloudinary.com/svgxapp/image/upload/v1588298610/svgx-app_tnkpa3.png
This will get the image in its original format. In my code, however, I've applied a few transformations in order to serve a more efficient version:
https://res.cloudinary.com/svgxapp/image/upload/f_auto,q_auto:good,w_600/v1588298610/svgx-app_tnkpa3
Let's break that URL down:
- svgxapp refers to the account that the image belongs to. Officially this is called the "Cloudinary cloud name".
- v1588298610 seems to be a "folder" identifier, i.e. all the images in the same folder share the same identifier. However, there's an actual folder hierarchy as well; for example, the image https://res.cloudinary.com/svgxapp/image/upload/v1588298149/samples/sample.jpg resides in the samples folder.
- svgx-app_tnkpa3.png is the file name. Note that the original name of the image is svgx-app.png, but Cloudinary adds a random string (_tnkpa3) at the end.
- In my transformed URL I left off the extension, ending in just svgx-app_tnkpa3. The simple reason for that is the auto format that I will discuss in more detail below.

Transformations can be added after the upload/ section of the URL. I've only used a tiny fraction of the available ones here, but you can read about them in more detail in the official docs. For my needs, these are more than enough.
- f_auto means "fetch format auto", and it allows me to leave out the file extension. This transformation tells Cloudinary to serve the most efficient image format supported by the client's browser. Ideally, this would be .webp. More details here.
- q_auto:good works with f_auto to an extent, although I haven't dived too deep into that. Suffice to say that these two transformations, in concert, will ensure the browser receives the absolute best quality of the image in an ideal format.
- w_600 constrains the image to a maximum width of 600px while preserving proportions. For the image I linked above, this results in a 600px width and 450px height, starting from a 1600x1200 original.

When an image is served with a new set of URL transformations, Cloudinary will generate a new version of the master image and apply those transformations to it. The new version will persist alongside the master. If you pay attention you'll notice that an image with a fresh URL transformation (one that was never requested before) will take a second or so to load the first time, but going forward it will load practically instantly. Pretty neat!
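To make the URL anatomy concrete, here is a tiny hypothetical helper (my own sketch, not part of Cloudinary's official SDKs) that assembles a delivery URL from the pieces discussed above:

```javascript
// Hypothetical helper that assembles a Cloudinary delivery URL.
// The cloud name and public ID used below are the ones from this article.
function cloudinaryUrl(publicId, { width } = {}) {
  const transforms = ["f_auto", "q_auto:good", width ? `w_${width}` : null]
    .filter(Boolean) // drop unused transformations
    .join(",");
  return `https://res.cloudinary.com/svgxapp/image/upload/${transforms}/${publicId}`;
}

cloudinaryUrl("v1588298610/svgx-app_tnkpa3", { width: 600 });
// → https://res.cloudinary.com/svgxapp/image/upload/f_auto,q_auto:good,w_600/v1588298610/svgx-app_tnkpa3
```

For a handful of images this is overkill, but it shows how mechanical the URL scheme is: transformations are just a comma-separated segment between upload/ and the public ID.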
First, check out the hero image for this article, as well as the thumbnail in the blog index. The URLs that generate them are:
<!-- hero -->
https://res.cloudinary.com/svgxapp/image/upload/f_auto,q_auto:good,e_vectorize,w_848/sample
<!-- thumbnail -->
https://res.cloudinary.com/svgxapp/image/upload/f_auto,q_auto:good,e_vectorize,w_343/sample
Thumbnail - 100px wide:
https://res.cloudinary.com/svgxapp/image/upload/f_auto,q_auto:good,w_100/v1588298610/svgx-app_tnkpa3
Thumbnail - 100px tall:
https://res.cloudinary.com/svgxapp/image/upload/f_auto,q_auto:good,h_100/v1588298610/svgx-app_tnkpa3
Thumbnail - 250px wide with a sepia filter:
https://res.cloudinary.com/svgxapp/image/upload/f_auto,q_auto:good,w_250,e_sepia:80/v1588298610/svgx-app_tnkpa3
There are numerous ways you can manipulate images through the Cloudinary URL transformations that are outside the scope of this article, so I will end it here. The takeaway is that, depending on your use case, you might get a big performance boost should you decide to host your static images and videos with a service like Cloudinary. More importantly, it removes most of the friction in having to deal with multiple image sizes, resolutions and so on.
Full disclosure: Cloudinary did NOT sponsor this article.
Recently I've been working on a new desktop app using Svelte and Electron.
Electron uses Chromium as the browser engine, which means modern APIs are fully supported. In turn, this allows developers to build cross-platform apps with consistent and predictable behavior.
I made a Svelte-Electron-TailwindCSS starter template which should provide some insight into how a typical Svelte project is structured.
Application state can be kept in a store that looks like this. Mine consists of a single file named src/store.js. For this example, I'll store the state for the current theme (light/dark).
import { writable } from "svelte/store";
export const theme = writable('light');
The above translates to: "Create a writable store (there are also read-only stores, not the subject of this discussion) called theme, and initialize it with a default value of light."
To use the store data, import the store in a component such as App.svelte:
<script>
import {theme} from './store.js';
</script>
<h1>Theme: {$theme}</h1>
<button on:click={() => theme.set('light')}>
light
</button>
<button on:click={() => theme.set('dark')}>
dark
</button>
Initially, the app loads with a heading of "Theme: light". Additionally, there are two buttons that, when clicked, will change the stored theme to either "light" or "dark".
You'll access the value of the theme store using the $ symbol. You can change the value with .set(value).
Try out the example in the Svelte REPL.
The above is cool, and it works well for cross-component communication, but refreshing the page will reset the state to the default 'light'.
For the app I'm building, I need to persist certain store values across refreshes and restarts. A simple solution is to save these variables to the underlying browser's localStorage.
Let's modify the store to retrieve the default value from localStorage.
import { writable } from "svelte/store";
const storedTheme = localStorage.getItem("theme");
export const theme = writable(storedTheme);
This alone won't work, because storedTheme will evaluate to null when there's nothing in localStorage yet (for example, when the app is first initialized).
Let's fix this by registering a subscriber:
import { writable } from "svelte/store";
const storedTheme = localStorage.getItem("theme");
export const theme = writable(storedTheme);
theme.subscribe(value => {
localStorage.setItem("theme", value === 'dark' ? 'dark' : 'light');
});
It took me a while to wrap my brain around this, but essentially it creates a watcher of sorts that writes the value of theme to localStorage whenever it changes.

The cool thing is that it also saves the default value light to localStorage when it doesn't exist. You can test this by going into the browser's dev tools, deleting the key, and refreshing the page. You'll notice that the key gets recreated and set to light.
Now when you call theme.set('dark') in your app, the subscriber will get triggered and set the value of theme to dark in localStorage.
From now on, refreshing the page, or closing and reopening it, will preserve whatever value was saved last.
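To make the mechanics easy to follow outside the browser, here's a self-contained sketch of the same subscribe-and-persist pattern. A tiny writable implementation stands in for svelte/store, and a plain object stands in for localStorage (both stand-ins are mine, for illustration only):

```javascript
// Minimal stand-in for svelte/store's writable: holds a value and
// notifies subscribers. Like Svelte, it calls each new subscriber
// immediately with the current value.
function writable(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn(value));
    },
    subscribe(fn) {
      subscribers.add(fn);
      fn(value); // immediate call, mirroring Svelte's store contract
      return () => subscribers.delete(fn);
    },
  };
}

const storage = {}; // stands in for the browser's localStorage

// Hydrate from "storage", falling back to the default when nothing is saved.
const storedTheme = storage.theme ?? null;
const theme = writable(storedTheme ?? "light");

// Persist every change; because subscribing fires immediately,
// the default value is written to storage right away.
theme.subscribe((value) => {
  storage.theme = value === "dark" ? "dark" : "light";
});

theme.set("dark"); // the subscriber now persists "dark"
```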
localStorage and security

Unfortunately, the complete example does not work in the Svelte REPL, due to security issues related to localStorage.
The problem with localStorage is that it relies on the client's browser to handle values used by the web app. You can imagine how this could cause issues if the developer uses those values without validation or other measures to ensure the integrity of the data. For example, if the front-end passes values from localStorage to the back-end for processing and storing in a database, that data needs to be sanitized and validated properly, and definitely not trusted implicitly.
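As a minimal sketch of that advice (a hypothetical helper of my own, not from the article's code), validation for this particular store could collapse any unexpected value to the default before it is ever used:

```javascript
// Never trust a raw localStorage value: anything other than the two
// known themes falls back to the default "light".
function sanitizeTheme(raw) {
  return raw === "dark" ? "dark" : "light";
}

sanitizeTheme("dark");  // "dark" passes through
sanitizeTheme("<script>alert(1)</script>"); // tampered value -> "light"
sanitizeTheme(null);    // missing key -> "light"
```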
Then again, these problems should not be relevant as long as the app runs strictly on the client side. For this example, theme is used only for presentation purposes. Even if the client decides to "hack" the value in localStorage, the most this will accomplish is to scramble the UI colors a bit.
In this guide I'll explain how Mac developers can install a Windows 10 Virtual Machine on their Apple computer, using VirtualBox.
A VM for a platform other than the one you're building on can be invaluable for testing.
Microsoft kindly provides a free Windows 10 virtual machine that you can download here and use for 90 days. Do so, after picking your VM platform of choice.
Since my Mac environment is set up mainly for Laravel development, and since I prefer Vagrant over Valet, I have both Vagrant and VirtualBox already installed.
My personal preference for the Windows 10 VM is to use VirtualBox, since I am partial to the GUI.
Make sure you have ~30 GB of free space on your computer initially. The VM comes in a ZIP file that is pretty substantial at over 7 GB, and you'll have to unzip it as well. When you import the VM into VirtualBox, another ~14 GB will be created. You may delete the original download after importing, however.
For VirtualBox, you will have downloaded a file called MSEdge.Win10.VirtualBox.zip.
After the download completes, extract the archive.
Go to the folder that was just extracted (it should be called MSEdge - Win10). If VirtualBox is already installed, you can double-click the MSEdge - Win10.ovf file to import it into VirtualBox. A settings window will appear. Accept the defaults and hit Import.
⏱ Wait a few minutes for the import to finish. You should see something like the following.
Select the Win10 VM and click Start.
There will be a security prompt asking you to give VirtualBox access to your keyboard. Go to System Preferences > Security & Privacy > Privacy > Input Monitoring and check VirtualBox after unlocking the settings with your root password. This will require VirtualBox to restart, so if the VM has already booted, shut it down, then open VirtualBox and start the VM once again.
If everything worked correctly, you should see the following screen.
User: IEUser
Password: Passw0rd!
Boom, you're in!
Before starting any actual work, it's a very good idea to take a snapshot of the initial state. Why? Because this will allow you to use the VM even after the original 90 days have expired. Once it stops working you can restore it to this point and use it for another 90 days. Don't worry, this is not illegal; even Microsoft suggests it on the page you've just downloaded the VM from.
To take the snapshot, once the VM has booted and with the VM window active, in the VirtualBox VM application menu, select Machine > Take Snapshot... then give it a name and optional description when prompted, and click OK.
With the VM powered off, click the menu next to the name of the VM.
Then select the snapshot you wish to restore and click Start (for the latest), or Restore (for older snapshots).
Restore the latest (current) state of the VM
Restore a previous snapshot
You can back up the snapshots individually if you wish. The files are located in /Users/YourUser/VirtualBox VMs/MSEdge - Win10/Snapshots.
Keep in mind that snapshots require additional storage space, and that can be a pretty steep price to pay. In my case, 3 snapshots take 26.7 GB. Ouch!
To transfer files back and forth between your computer and the VM, while the VM is running, in the VirtualBox application menu select Devices > Drag and Drop > Bidirectional.
Now you should be able to open a File Explorer window in the VM, then drag a file over from your Mac.
Similarly, you'll likely want to be able to copy/paste between your computer and the VM, so make sure to check Devices > Shared Clipboard > Bidirectional.
You'll find that some of the keyboard shortcuts you're used to on the Mac behave differently in the Windows VM. Here are a couple of the mappings I've discovered so far (as a general rule, use Control where you would use CMD):
If you are using this VM for development, I would highly recommend setting up the environment to your exact specs, then taking another snapshot. Do this at the beginning, so that you can still take advantage of the full 90 day activation period.
Some of the tools I installed on my fresh installation, for example, include:
I'll leave you with a screenshot of a new Svelte + Electron cross-platform app I'm working on, and how it looks on the Windows VM. Pretty nifty!
Today I'm going to explain how to build a reusable SVG icon component using Laravel 7's new Blade components. For this, I'm going to use one of my favorite free SVG icon libraries, Feather.
First, let's take a look at the SVG markup behind a typical Feather icon, for example chevron-left.svg.
<svg
xmlns="http://www.w3.org/2000/svg"
width="24"
height="24"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
stroke-width="2"
stroke-linecap="round"
stroke-linejoin="round"
class="feather feather-chevron-left"
>
<polyline points="15 18 9 12 15 6"></polyline>
</svg>
So how should our reusable component be structured? One way would be to extract the svg element as the actual component, along with the sensible defaults that are already provided for us. The definition of the vector (everything wrapped by the svg tags) can live in its own Blade partial that can be slotted or included into the main component.
The documentation I linked above is pretty consistent, but if you want something more visual, there's a free video on Laracasts that explains these new features very nicely.
To start, Laravel 7 introduced a new command to scaffold a component.
php artisan make:component Icon
This will generate 2 files: app/View/Components/Icon.php and resources/views/components/icon.blade.php, a class and the associated view, respectively. You may use the --inline switch to make an inline component (meaning no view), but I'm not going to do that here.
I like to call the component class the "component controller" for what it's worth, because it does act like a controller, in a sense.
Note that I called my reusable component Icon. I could have just as well called it Svg, Feather, or any number of things. I kind of like the idea of naming it Feather, as a way to distinguish it from other, potential, SVG images or libraries. That way, I could keep a set of tight defaults very specific to each library or similar group of SVGs.
The freshly-generated class looks like this:
namespace App\View\Components;
use Illuminate\View\Component;
class Icon extends Component
{
public function __construct()
{
//
}
public function render()
{
return view('components.icon');
}
}
Let's populate it with some defaults.
namespace App\View\Components;
use Illuminate\View\Component;
class Icon extends Component
{
public $icon;
public $width;
public $height;
public $viewBox;
public $fill;
public $strokeWidth;
public $id;
public $class;
public function __construct(
$icon = null,
$width = 24,
$height = 24,
$viewBox = '24 24',
$fill = 'currentColor', // currentColor, none
$strokeWidth = 2,
$id = null,
$class = null
)
{
$this->icon = $icon;
$this->width = $width;
$this->height = $height;
$this->viewBox = $viewBox;
$this->fill = $fill;
$this->strokeWidth = $strokeWidth;
$this->id = $id ?? '';
$this->class = $class ?? '';
}
public function render()
{
return view('components.icon');
}
}
All we're doing here is taking all the attributes from the svg element and injecting them into the constructor. We maintain the same defaults from the original SVG and save the attributes to public properties.
There is no need to pass any data to the view, since all the public properties defined in this class are available to it.
The new icon.blade.php view is very plain, containing only a div with a thoughtful quote.
<div>
<!-- Waste no more time arguing what a good man should be, be one. - Marcus Aurelius -->
</div>
Let's replace all of that with our svg wrapper.
<svg
xmlns="http://www.w3.org/2000/svg"
width="{{ $width }}"
height="{{ $height }}"
viewBox="0 0 {{ $viewBox }}"
fill="{{ $fill }}"
stroke="currentColor"
stroke-width="{{ $strokeWidth }}"
stroke-linecap="round"
stroke-linejoin="round"
id="{{ $id }}"
{{ $attributes->merge(['class' => "feather feather-$icon $class"]) }}
>
@includeIf("icons.$icon")
</svg>
We are now referencing all those public properties that we assigned earlier in the class.
I chose to use an include rather than a slot, for convenience and to reduce duplication.
Two things are worth paying special attention to here.
First, there's $attributes->merge(['class' => "feather feather-$icon $class"]), which tells Laravel to merge some default attribute values with new ones that may be passed by the user. In this case, the svg element will have a class of "feather feather-chevron-left" by default, but will also merge in additional classes provided by $class. This should become more apparent farther down, with actual examples.
Second, @includeIf("icons.$icon") is super useful to prevent Laravel from blowing up if an invalid icon is requested and the include file can't be resolved.
The actual vector definitions for each icon live in tiny individual Blade templates. I keep mine in resources/views/icons/. Each file is named after the Feather icon it contains. In our example, chevron-left.blade.php will contain:
<path d="M7.05 9.293L6.343 10 12 15.657l1.414-1.414L9.172 10l4.242-4.243L12 4.343z"></path>
When the icon component is rendered, this snippet will be wrapped by the svg markup from earlier.
Finally, we get to the "how to use it" part. I really like what Laravel 7 has done here. They've adopted a Vue-like syntax for what has essentially become a dynamic pseudo-HTML tag. You invoke a component with <x-component-name />, so CoolIcon.php + cool-icon.blade.php will correspond to an <x-cool-icon /> tag.
<x-icon icon="chevron-left" width=32 height=32 viewBox="20 20" strokeWidth=0 />
Here, I'm overwriting some of the defaults: bigger icon, smaller viewBox, etc.
Had I decided to use a slot instead, here's how it might have looked (not as clean, methinks):
<x-icon width=32 height=32 viewBox="20 20" strokeWidth=0>
@includeIf("icons.chevron-left")
</x-icon>
One thing that I don't really like, but don't have a solution for, is that the IDE (PHPStorm in my case) doesn't know what to make of this new tag. I can add it to its list of accepted tags to prevent it from being marked as an "Unknown html tag", but I still can't click through to the component definition.
This makes the API (the available props) opaque. For someone less experienced with the project and/or Laravel 7, it might be a bit of a hassle to find out what's going on. Overall though, I think the benefits of this new pattern far outweigh the drawbacks.
Here are just a few ways in which to use this component. For Feather Icons in particular, some icons have fill but no stroke, others have the opposite. Some icons look cleaner with varying stroke thicknesses, while others fit better if you tweak the viewBox independent of the size. All these SVG parameters are supported here, and of course you can add your own.
But what's even cooler is how seamlessly classes work. In these examples, I'm changing the appearance with TailwindCSS.
<x-icon icon="chevron-left" width=32 height=32 strokeWidth=0 />
<x-icon icon="external-link" width=32 height=32 fill="none" strokeWidth="2" class="text-blue-500" />
<x-icon icon="x" width=32 height=32 class="bg-red-500 text-white p-2 rounded-full" />
The hype around Laravel 7 was perfectly justified, as it has indeed brought a whole lot of cool features. The new components help propel the Blade templating engine to new heights, further strengthening an already rock-solid platform.
I always have some doubts about whether a pattern is the best (hint: there's always something better), and this is no exception. While this new method of building reusable SVG components works well for me at this time, it is very possible that I will find a better way later. I will no doubt share that here if it happens, but until then, componentize away!
A label is a short text description, and it can be initially created with the secret. I also wanted the ability to edit it later, directly from the dashboard. Hence the idea of "in place or inline editing".
What follows is a complete guide on how I built this feature using Livewire and AlpineJS, a deadly combo on top of Laravel, that makes a lot of SPA-like behavior possible without writing complex JavaScript. Livewire brings the back-end reactivity while Alpine handles the UI interactions. So if you're a fan of PHP and Laravel in particular, give this a 👀.
TLDR: Don't feel like reading through the entire thing? No problem, here's the repo so you can dive right in.
Update: Additional tinkering revealed some quirks with nested Livewire components in combination with AlpineJS. Instead of rewriting the entire guide (for the 3rd time), I'll show you my solution at the end. The repo has already been updated to reflect the changes.
Jump to the update →
You'll find the code for this demo here. Currently it contains an additional Livewire component that handles real-time tag & text search filtering.
In your Laravel project (preferably Laravel 7.x), you'll need to install Livewire and AlpineJS.
Follow the official installation instructions. I skipped the config & vendor assets publishing part and did only the bare minimum:
composer require livewire/livewire
In resources/views/layouts/app.blade.php, import the Livewire assets (CSS + JS):
...
<!-- Laravel <= 7 -->
@livewireStyles
<!-- optional for Laravel >= 7 -->
<livewire:styles/>
</head>
...
<!-- Laravel <= 7 -->
@livewireScripts
<!-- optional for Laravel >= 7 -->
<livewire:scripts/>
</body>
Notice the 2 ways of importing the assets, depending on your Laravel version. The first method works in all versions.
AlpineJS can be loaded from the CDN, which works just fine for me. Add it to app.blade.php right above @livewireScripts:
...
<script src="https://cdn.jsdelivr.net/gh/alpinejs/alpine@v2.1.2/dist/alpine.js" defer></script>
@livewireScripts
</body>
Luckily, the demo project I created is already set up with data that can be reused. We're talking a list of "widgets" (random sentences), to which I randomly assigned a variable number of tags. The tags are color names.
Not shown is a column belonging to widgets called short_id, which mimics the short URL on 1Secret. Its purpose here is simply presentational - I want to display it as a default when the widget name is empty. This is what I'm starting from:
For this guide, I want to be able to edit the widget names in place. I won't be showing exactly what I did in 1Secret, to avoid exposing the internals, but the idea is similar.
Let's whip up a few simple requirements for this feature.
Data

- an empty widget name will be saved as NULL in the database

UI

- when the widget name is empty, the short_id will be displayed instead

The view we're aiming to enhance is resources/views/livewire/widgets.blade.php. The code that displays the widget name:
...
@foreach($widgets as $widget)
<div class="flex items-center justify-between p-2 -mx-2 hover:bg-gray-100">
{{ $widget->name }}
...
The first task is to replace the static widget name with a Livewire component.
This guide requires a single Livewire component that can be generated at the command line (I use a as an alias for php artisan).
php artisan livewire:make EditName
Two files will be generated: a view and a controller class.
The public properties in the controller are accessible from the view. Data flows back and forth as if by magic, no JS required. Of course, there is JS behind the scenes but the developer need not know about it. To achieve the desired interactivity, a little something extra is needed, and that's where AlpineJS comes in.
Back in resources/views/livewire/widgets.blade.php, let's perform a simple swap with the newly created Livewire view component. Replace {{ $widget->name }} with a Livewire directive:
...
@foreach($widgets as $widget)
<div class="flex items-center justify-between p-2 -mx-2 hover:bg-gray-100">
@livewire('edit-name', compact('widget'), key($widget->id))
...
Notice that the syntax is identical to Laravel's @include directive. In addition, I'm passing the $widget object to the Livewire component. For nested Livewire components (which is the case here, but may not be for you), it is strongly recommended to pass a unique value to key(), just like in Vue. This will help Livewire identify the child item when the parent is updated.
In the newly created Livewire view:
<div class="p-2">
{{ $origName }}
</div>
CAUTION: The Blade view must have one, and only one, root element, in this case the div. If you omit it and just use {{ $origName }}, you'll get an ErrorException Undefined offset: 1 error, then spend half an hour like a doofus trying to figure out what you did wrong.
Before this can work, there's additional work to be done in EditName, so let's open it up and add the following:
class EditName extends Component
{
public $origName; // initial widget name state
public function mount(Widget $widget)
{
$this->origName = $widget->name;
}
public function render()
{
return view('livewire.edit-name');
}
}
mount essentially acts like __construct, and we can use it to initialize certain properties, such as the widget that I passed from the view. Here, $origName will automatically become available to the view - just remember that it must be declared public.
CAUTION: Though you may be tempted to do this...
public $widget;
public function mount(Widget $widget)
{
$this->widget = $widget;
}
... don't. Any public property will be exposed to the front-end via JavaScript, so if your widget object contains sensitive info (not the case here), you'll want to extract only the properties you actually need.
Now if we reload the page, everything should have been "rewired" but still look the same.
So I want the widget name to change into a text input when I click it. I suspect this might be doable with pure Livewire, but the additional server requests aren't justified especially since we're not passing data, but merely toggling the UI. That's where AlpineJS comes in. We've already installed it earlier so we're good to go.
First I'll show you the complete code for the UI interactions, then I'll explain it.
resources/views/livewire/edit-name.blade.php
<div
x-data="
{
isEditing: false,
isName: '{{ $isName }}',
focus: function() {
const textInput = this.$refs.textInput;
textInput.focus();
textInput.select();
}
}
"
x-cloak
>
<div
class="p-2"
x-show=!isEditing
>
<span
x-bind:class="{ 'font-bold': isName }"
x-on:click="isEditing = true; $nextTick(() => focus())"
>{{ $origName }}</span>
</div>
<div x-show=isEditing class="flex flex-col">
<form class="flex" wire:submit.prevent="save">
<input
type="text"
class="px-2 border border-gray-400 text-lg shadow-inner"
placeholder="100 characters max."
x-ref="textInput"
wire:model.lazy="newName"
x-on:keydown.enter="isEditing = false"
x-on:keydown.escape="isEditing = false"
>
<button type="button" class="px-1 ml-2 text-3xl" title="Cancel" x-on:click="isEditing = false">𐄂</button>
<button
type="submit"
class="px-1 ml-1 text-3xl font-bold text-green-600"
title="Save"
x-on:click="isEditing = false"
>✓</button>
</form>
<small class="text-xs">Enter to save, Esc to cancel</small>
</div>
</div>
Cool! Now that we have the UI interaction basics in place, some explanations are in order. Essentially there are two divs inside the root element, each holding a span and a form with an input field, respectively.
Right at the top, in the wrapper div, there's an x-data Alpine directive (it should look very familiar to Vue devs) that holds the state of the component as an object.
- isEditing: false toggles the visibility of the span/input; it's what gives the illusion that we are editing the item inline
- isName: '{{ $isName }}' is calculated on the back-end and controls the font weight of the item (bold for actual widgets)
- focus is a function that is used to place the cursor inside the text input and select the contents

Below, x-cloak is used to prevent the browser from flashing hidden content before styling is applied.
Moving on to the span element, it is nested inside a parent div whose visibility is... well... visible.
- x-bind:class="{ 'font-bold': isName }" will apply the font-bold class if isName is true. This isn't functional yet; it needs the logic from the back-end.
- x-on:click="isEditing = true; $nextTick(() => focus())" performs two functions: first it hides the span while revealing the text input, then it calls the function that places the cursor in the input and selects the contents.

$nextTick is a savior. Without it, Alpine will try to invoke focus() at the same time that it toggles visibility, but the DOM has not yet finished updating, so the input will not be focused after it becomes visible. With $nextTick we are performing the two actions in sequence, allowing the input to be rendered before interacting with it.

The form element containing the text input and the two buttons is inside a parent div that is hidden by default. If we hadn't used x-cloak, the form and its contents would briefly flash when the page is first loaded (or hard-reloaded).
- The form's submit event is intercepted with a Livewire directive this time, wire:submit.prevent="save". In English: "prevent the form from being submitted the usual way; instead, call the save() method on the back-end"
- x-ref="textInput" provides a reference to the text input that we can use in the focus() function to focus inside it
- wire:model.lazy="newName" is the second Livewire directive, and its purpose is to bind the contents of the text input to the $newName variable. This variable is not yet defined on the back-end, which is why the input is not pre-filled with the widget name. The lazy modifier ensures that only 1 request is made to the back-end, when the input loses focus, instead of one on every keypress
- x-on:keydown.enter and x-on:keydown.escape both perform the same action, namely to exit "edit mode"
- the Cancel and Save buttons each have an x-on:click directive that also exits "edit mode"

Attempting to save the new value will error out, of course, since the back-end isn't wired properly yet. Let's go and do that.
Once again, I'll dump the code in the Livewire controller class, then I'll explain it.
app/Http/Livewire/EditName.php
class EditName extends Component
{
public $widgetId;
public $shortId;
public $origName; // initial widget name state
public $newName; // dirty widget name state
public $isName; // determines whether to display it in bold text
public function mount(Widget $widget)
{
$this->widgetId = $widget->id;
$this->shortId = $widget->short_id;
$this->origName = $widget->name;
$this->init($widget); // initialize the component state
}
public function render()
{
return view('livewire.edit-name');
}
public function save()
{
$widget = Widget::findOrFail($this->widgetId);
$newName = (string)Str::of($this->newName)->trim()->substr(0, 100); // trim whitespace & more than 100 characters
$newName = $newName === $this->shortId ? null : $newName; // don't save it as widget name it if it's identical to the short_id
$widget->name = $newName ?? null;
$widget->save();
$this->init($widget); // re-initialize the component state with fresh data after saving
}
private function init(Widget $widget)
{
$this->origName = $widget->name ?: $this->shortId;
$this->newName = $this->origName;
$this->isName = $widget->name ?? false;
}
}
The mount() method has grown quite a bit in size.

Now, in addition to the widget name (origName), I'm saving the widget id (so I can locate the record when I update it), the short id (to be used as a placeholder when the widget name is empty), a dirty state (newName) that the text input is bound to, and a flag that toggles the font weight of the item.
An init() method takes care of setting the initial state whenever 1) the component is initiated, and 2) an item is saved/updated.
Finally, the save() method (which needs to be public) is the same one we called earlier in the template with wire:submit.prevent="save".
At this point both the view and the controller should be wired up correctly. Let's fire it up.
Notice that the dirty state represented by $newName will persist in the text input, should you cancel halfway through editing. This is a design choice I made, though it could have just as well cleared the input or reset it to the original value.
There you go, awesome inline editing capabilities with a minimum of JavaScript. If this isn't a new golden age for the monolith, I don't know what is!
The code for the demo should you wish to peruse it.
The purpose of the original guide was to show how inline editing can be done with Livewire and Alpine. Mission accomplished. However, I built this functionality on top of an existing project, nesting the edit-in-place component inside the previous Livewire component. The (now) parent component deals with filtering items (or widgets, as I call them) on the page through either text search or tag selection. At the same time, each widget's name can be edited in place.
Livewire has some rules and, dare I say, limitations around nested components. Here are some of these:
- each component must have a single root element, a div
- a nested component must be given a key prop with a unique value, otherwise Livewire will get confused when it tries to update the DOM (e.g. when filtering items). An example of a unique value would be the current item's id
- when the child component is rendered inside a loop (@foreach), it should be the first line in the loop, i.e. it cannot be nested inside, say, another div
- the root div in the child component must not have Alpine directives assigned to it. In other words, if you want to put x-data on the root div, you'll have to nest another div inside it, and initiate Alpine inside that one. While this rule is illustrated in the code samples from the official documentation, it is not explicitly mentioned. A fellow dev pointed it out on GitHub before I noticed it.

I ran into some of these limitations while experimenting on how to fix the issues that started appearing after my original implementation.
Essentially what happened was that initial filtering (whether through text or tags) of widgets succeeded, meaning that the list of items was reduced properly. Removing the filter by deleting the text in the search box or deselecting the tags, however, produced garbled content, e.g. items not being actually restored to the correct state, or items being restored with the wrong tags. In addition, errors were thrown in the browser dev console and the JS functionality broke at this point, requiring a page reload before functionality could be restored.
So here's what I did to fix this. First, in the parent component, resources/views/livewire/widgets.blade.php.
Before
Inside the @foreach is a div which contains, in order: the edit-in-place Livewire child component, and the list of tags for the current widget in the loop. This wrapper div is part of the problem, as it relates to the rules above.
...
@foreach($widgets as $widget)
<div class="flex items-center justify-between p-2 -mx-2 hover:bg-gray-100">
@livewire('edit-name', compact('widget'), key($widget->id))
@if($tags = $widget->tags)
<div class="-mx-1 text-right">
@foreach($tags as $tag)
<small class="mx-1 {{ in_array($tag->id, $filters) ? 'bg-blue-200 text-blue-900' : 'bg-gray-200 text-gray-900' }} rounded-full px-2 shadow">
{{ $tag->name }}
</small>
@endforeach
</div>
@endif
</div>
@endforeach
After
Now the Livewire child component becomes the first element in the loop. This takes care of one problem.
If you're wondering why this works now, I'm pretty certain it relates to the key part I mentioned earlier. Previously, the wrapper div had no unique identifier assigned to it. This confused Livewire when the filters were removed, but now the first element in the loop is identified by key($widget->id), so items can be redrawn properly.
...
@foreach($widgets as $widget)
@livewire('edit-name', compact('widget'), key($widget->id))
@if($tags = $widget->tags)
<div class="mb-4 -mt-1 -mx-2">
@foreach($tags as $tag)
<small class="mx-1 {{ in_array($tag->id, $filters) ? 'bg-blue-200 text-blue-900' : 'bg-gray-200 text-gray-900' }} rounded-full px-2 shadow">
{{ $tag->name }}
</small>
@endforeach
</div>
@endif
@endforeach
Moving on to the child component, where the inline editing is handled: resources/views/livewire/edit-name.blade.php.
Before
Alpine directives are on the root div. Now I know that this is not OK.
<div
x-data="
{
isEditing: false,
isName: '{{ $isName }}',
focus: function() {
const textInput = this.$refs.textInput;
textInput.focus();
textInput.select();
}
}
"
x-cloak
>
<!-- the rest of the code -->
</div>
After
Instead, I've added a wrapper div with some of the styling pulled from the parent component (after removing the div that previously wrapped the child). Now the desired functionality has been restored.
<div class="flex items-center justify-between -mx-2 hover:bg-gray-100">
<div
class="p-2 flex items-center justify-between w-full"
x-show="!isEditing"
x-data="
{
isEditing: false,
isName: '{{ $isName }}',
focus: function() {
const textInput = this.$refs.textInput;
textInput.focus();
textInput.select();
}
}
"
x-cloak
>
<!-- the rest of the code -->
</div>
</div>
But...
There still remains a minor annoyance that I'm currently at a loss as to how to fix. Take a look:
This newly-discovered paradigm forced me to change the layout a little. While previously the widget name and tag list were displayed inline (name on the left, tags on the right), now the tags are below. Why? Because of what goes on in the loop:
Before
...
@foreach($widgets as $widget)
<div class="flex items-center justify-between">
<div>
<!-- Widget name -->
</div>
<div>
<!-- Widget tags -->
</div>
</div>
@endforeach
After
...
@foreach($widgets as $widget)
<div>
<!-- Widget name -->
</div>
<div>
<!-- Widget tags -->
</div>
@endforeach
Now granted, I have also experimented with moving the tags inside the child component, while also passing through the $filters array from the parent. This worked, except the filtered tags were no longer highlighted.
I suspect the broken highlighting micro-feature comes from the lack of reactivity between parent -> child, as documented here. To quote: "Nested components CAN accept data parameters from their parents, HOWEVER they are not reactive like props from a Vue component.".
And this makes a lot of sense, since I update the $filters array in the parent.
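To illustrate, here's roughly what that experiment looked like (a sketch reusing the component names from above; the actual code differed slightly):

```blade
{{-- parent view: $filters is handed to the child once, at mount --}}
@livewire('edit-name', compact('widget', 'filters'), key($widget->id))

{{-- when $filters later changes in the parent, the child keeps its
     original copy: nested Livewire components are not reactive --}}
```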
At the end of the day this little annoyance is something that I managed to work around, but at the same time I believe it was worth mentioning for posterity.
The official 6.0 -> 7.0 upgrade guide is good enough if you want the bare minimum, but for my own projects I chose to apply the diffs from the official repo instead.
So far I've upgraded 3 of my projects to Laravel 7, and the upgrade times were decent, as summarized in this tweet:
🚀 Manually upgraded 3️⃣ #laravel 6.0 projects → 7.0 over 2 days.
— Placebo Domingo (@brbcoding) March 8, 2020
Including deployment, it took:
1st - 54 min - some dependencies caused issues
2nd - 20 min
3rd - 10 min
1/3
Right away a pattern emerged: my projects weren't overly complex, and all upgrades followed basically the same path. The hero image at the top summarizes the list of framework files that need to be upgraded. When a 4th project became an upgrade candidate, it got me thinking that I should perhaps automate this to an extent.
PHPStorm has this neat feature that can create a patch from a commit. I've used this many times before, to lift certain diffs and then reapply them somewhere else. I thought, what if I could lift the diffs from one Laravel project, and apply them to another?
Note You might be able to do the same with git at the command line if you're a Git wizard. I've done it in the past, but while I mostly use the command line, for certain tasks I prefer an IDE. Sadly I didn't document the specific commands I used, and since I don't use them regularly, they've kinda vacated my brain.
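For posterity, here's a hedged sketch of how the same lift-and-apply could look with plain git (these are not necessarily the commands I originally used; the repo and file names are made up for the demo):

```shell
set -e
workdir=$(mktemp -d)

# "Project A": the repo that already contains the upgrade commit
git init -q "$workdir/project-a"
cd "$workdir/project-a"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"
echo '"laravel/framework": "^7.0"' > composer.json
git add composer.json
git -c user.name=demo -c user.email=demo@example.com commit -q -m "Upgrade to Laravel 7"

# Lift the upgrade commit into a patch file ("Create Patch" in PHPStorm)
git format-patch -1 HEAD --stdout > "$workdir/laravel7-upgrade.patch"

# "Project B": the repo receiving the transplant
git init -q "$workdir/project-b"
cd "$workdir/project-b"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

# Apply the patch ("Apply Patch" in PHPStorm)
git apply "$workdir/laravel7-upgrade.patch"
cat composer.json
```

The same caveats apply: exclude files with custom changes from the patch (or resolve the conflicts by hand) and regenerate composer.lock afterwards.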
So here are the steps I followed to transplant the Laravel 7 upgrade from Project A (previously upgraded to Laravel 7) to Project B (Laravel 6).
1. In Project A, go to Version Control > Log and select the Laravel 7 upgrade commit (the entire framework upgrade is part of a single commit, in my case)
2. Create Patch from that commit
3. In Project B, go to VCS > Apply Patch and select the .patch file created previously
4. Remove composer.lock from the file list
5. Exclude routes/web.php, then add use Illuminate\Support\Facades\Route; manually after. Note You may need to do this for other files in this list where you have custom changes, most notably routes/api.php.
6. Rename MAIL_DRIVER -> MAIL_MAILER in .env manually after
7. Delete composer.lock
8. Run composer install
At this point, Project B should have been successfully upgraded to Laravel 7.
While it doesn't follow the pure definition of "automation", this technique worked really well for me. Your mileage will obviously vary; the more complex your project, the less smoothly the upgrade is likely to go.
Here's the file structure in text form, should you need to copy/paste any of it, and happy upgrading!
app/
Exceptions/
Handler.php
Http/
Middleware/
VerifyCsrfToken.php
Kernel.php
config/
app.php
cache.php
cors.php
filesystems.php
mail.php
queue.php
session.php
resources/lang/en/
passwords.php
public/
.htaccess
routes/
api.php
console.php
web.php
.env.example
composer.json
composer.lock
phpunit.xml
The blog engine itself is Tighten's Jigsaw, a Laravel static site builder that is perfect for my needs. Because it is Laravel/PHP based, the base template can be customized extensively, something that I have done here with some success.
Today I'll go into more detail about one customization in particular: the hero image featured at the top of every article.
Whether you've read my articles before or not, there are two types of hero images I typically use at the top of a blog post. There's the generic Unsplash image that bears a resemblance to the subject matter, like this very article for example. Then there's the more technical image that I create myself, such as this article about having Fun at the Laravel Console.
This boils down to either self-hosted or Unsplash images. The way each of these is generated differs slightly.
Disclaimer I write all articles in Markdown and I'm not sure how (and if) this would work in other formats.
Each Jigsaw Markdown post (located in {project}/source/_posts) has a metadata section at the top, written in YAML front-matter. This defines various article-specific parameters.
The current article, for example, would have the following as built-in defaults:
---
extends: _layouts.post
section: content
title: How to Add an Unsplash or Custom Hero Image to a Jigsaw Article
date: 2020-03-10
description: A guide for adding a custom hero image programmatically to a Jigsaw blog post.
tags: [jigsaw, laravel]
featured: false
---
It's a very clean and simple format that is self-explanatory in what it accomplishes, so I won't go into further detail here, but you can read more on the official documentation page.
The cool thing is that you can extend this metadata to the limits of your imagination. This is exactly what I did in order to automate displaying custom hero images at the top of each article. Let's find out how.
For Unsplash images, such as the current article, I've added these extra parameters:
---
# defaults
image: https://source.unsplash.com/6yjAC0-OwkA/?fit=max&w=1350
image_thumb: https://source.unsplash.com/6yjAC0-OwkA/?fit=max&w=200&q=75
image_author: Esteban Lopez
image_author_url: https://unsplash.com/@exxteban
image_unsplash: true
image_overlay_text:
---
The above will render the image along with the attribution right below it: Photo by Esteban Lopez on Unsplash. Both the site and the author are linked.
My article on the Laravel Console features a custom image that is self-hosted and saved in the /assets/img/ project directory. This is how the metadata looks:
# defaults
image: /assets/img/2020-02-10-laravel-console-fun.png
image_thumb: /assets/img/2020-02-10-laravel-console-fun.png
image_author:
image_author_url:
image_unsplash:
image_overlay_text:
You can omit the empty keys, of course. I choose to keep them around to remind myself they exist.
Simply adding the additional metadata won't magically cause the image to be rendered. The first thing to make that happen is to include a Blade partial at the top of the source/_layouts/post.blade.php file.
@include('_partials.post-hero-image')
Then in source/_partials/post-hero-image.blade.php I have the following:
@if($image = $page->image)
<section class="w-full flex flex-col items-center justify-center relative">
@if($imageOverlayText = $page->image_overlay_text)
<div
class="absolute font-black p-12 text-6xl rounded-full"
style="
color: #ff0a5c;
background-color: #ffeb3b;
filter: invert(1);
mix-blend-mode: exclusion;
transform: rotate(-5deg);
box-shadow: 15px 15px #ff0a5c;
text-shadow: 5px 5px 1px #05e2ff;
"
>
{{ $imageOverlayText }}
</div>
@endif
<img src="{{ $image }}" alt="{{ $page->imageAttribution() ?: $page->title }}">
@if($imageAttribution = $page->imageAttribution(true))
<small class="block text-center text-xs">
{!! $imageAttribution !!}
</small>
@endif
</section>
@endif
This part @if($imageOverlayText = $page->image_overlay_text) is recent, and I'll circle back to it shortly.
Starting at the top, the entire block is wrapped in a check for the existence of an image source: @if($image = $page->image). For Unsplash images it's an absolute URL, while for local images it's a relative path.
Next, the image is displayed: <img src="{{ $image }}" alt="{{ $page->imageAttribution() ?: $page->title }}">. The alt text will be either the Unsplash author attribution, or the title of the page if the former is missing.
Finally, if there's an attribution (in other words, an Unsplash image), I'll show the Photo by X on Unsplash snippet below the image:
@if($imageAttribution = $page->imageAttribution(true))
<small class="block text-center text-xs">
{!! $imageAttribution !!}
</small>
@endif
The final piece of the puzzle is the custom $page->imageAttribution() method, which I will explain next.
In Jigsaw, you can define your own global helper methods in /config.php, inside the main array. Here's what imageAttribution() looks like:
return [
    // ...
    'imageAttribution' => function ($page, $html = false) {
        $str = '';
        $image_author = $page->image_author;
        $image_author_url = $page->image_author_url;
        if ($image_author) {
            $str .= 'Photo by ';
            if ($html && $image_author_url) {
                $str .= '<a href="' . $image_author_url . '" title="' . $image_author . '">' . $image_author . '</a>';
            } else {
                $str .= $image_author;
            }
        }
        if ($page->image_unsplash) {
            if ($html) {
                $str .= ' on <a href="https://unsplash.com" title="Unsplash">Unsplash</a>';
            } else {
                $str .= ' on Unsplash (https://unsplash.com)';
            }
        }
        return $str;
    },
];
I hope the above is self-documenting, but in a nutshell its sole purpose is to render the Photo by X on Unsplash snippet, either as plain text or HTML (pass true as the second argument). I use the HTML version below the image, while the plain text goes in the image alt text.
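For reference, given this article's Unsplash front-matter from earlier, the two call forms produce the following (outputs traced by hand from the function above):

```php
// Plain text (used for the image alt attribute)
$page->imageAttribution();
// "Photo by Esteban Lopez on Unsplash (https://unsplash.com)"

// HTML (pass true; used for the snippet below the image)
$page->imageAttribution(true);
// 'Photo by <a href="https://unsplash.com/@exxteban" title="Esteban Lopez">Esteban Lopez</a>
//  on <a href="https://unsplash.com" title="Unsplash">Unsplash</a>'
```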
My 2020 Tech Radar article features some funky text overlaid on top of an Unsplash image. This text is controlled by this post meta:
---
# ...
image_overlay_text: 2020 Tech Radar
---
Going back to source/_partials/post-hero-image.blade.php, the following block controls whether this text is displayed:
@if($imageOverlayText = $page->image_overlay_text)
<div
class="absolute font-black p-12 text-6xl rounded-full"
style="
color: #ff0a5c;
background-color: #ffeb3b;
filter: invert(1);
mix-blend-mode: exclusion;
transform: rotate(-5deg);
box-shadow: 15px 15px #ff0a5c;
text-shadow: 5px 5px 1px #05e2ff;
"
>
{{ $imageOverlayText }}
</div>
@endif
There is an obvious issue with this approach: it is quite inflexible. I tweaked the text styling to work well with that particular image, but if I try to apply it to other images, it will likely look out of place.
I'm a fan of the Rule of three refactoring principle, so if I reach the point where I'm doing this 2-3 more times (each would have to be individually tweaked), one solution I could reach for is to add another meta parameter that points to a CSS class. I'd then move the styling to one of my SCSS files and simply apply the corresponding class to the snippet above.
I ❤️ how flexible Jigsaw is, and the extreme degree to which it can be customized. This is a relatively simple example of what can be done within this platform, but the world is your oyster, as they say.
It's a more sustainable list this time, especially since I'm already using some of this tech in my projects.
Let's begin.
Alpine.js came out of left field in 2019 and I've been quick to adopt it. I'm currently using it in several projects and I'll be using it in everything that needs interactions without the full power of Vue.
Livewire is the amazing Laravel front-end framework that brings a SPA-like feel to your monolithic Laravel apps, and lets you write more PHP code and less JS. I've been circling around it for a while, but I feel like this year is when I will start integrating it into my projects, especially with v1.0 having been officially tagged recently.
I made a little demo for myself using Livewire, with the goal of finding out how it can be used to filter a list of items in real time. Here's the repo if you're interested.
Inertia is the other side of the coin in terms of Laravel front-end frameworks. While Livewire focuses on "more PHP, less JS", Inertia is the opposite: "more JS, less PHP", and uses the back-end framework (like Laravel) as a sort of impromptu API, but then allows you to build the front-end as a SPA within the same codebase. A great concept, and something that I would have used heavily a couple years ago when I was more into the SPA camp.
I'm more in favor of the monolith these days, which makes Inertia less suited for my requirements, but if I ever need more complex SPA-like behavior, I'll be sure to reach for it.
I'm happy to say that I have finally started using Svelte for a couple of small experiments. I love the simplicity of it and how little boilerplate it has, even compared to Vue (which was pretty simple already).
I am seriously considering replacing Vue with Svelte, but my main concerns are integrating it with Laravel and Electron, so we'll see how that goes.
Building a Todo app was stupid simple, so check out the repo if you're interested. The best part about the Todo demo is that I was able to integrate TailwindCSS with Rollup and SASS/SCSS. This will provide a very solid starter foundation for future mini projects.
Then of course there's Sapper, the Svelte batteries-included framework. This brings it more in line with Vue and React but still compiles down to a smaller size, and is faster to render stuff.
The adoption rate for Svelte in the tech community may be puny compared to Vue/React but it's a fabulous piece of technology and I hope it carves itself a nice piece of the market. I, for one, will be using it more and more going forward.
Now that's pretty random isn't it? Why SVG? It's not exactly a branded technology. Well, lately I've become more interested in how SVG works and I'm starting to get it (barely). Tools such as Blobmaker and Waves are fascinating, and I'd like to build a similar utility myself.
I actually started building an SVG tool with Svelte but I'm not sure yet what direction it will take, and there are other priorities on my long list. One thing's certain: I will continue to explore the idea of generating SVG images programmatically.
Building an app with Electron is not my first rodeo and hopefully won't be my last. The most complex app I made with Electron and Vue is a crypto portfolio. More Electron apps are coming out each day and for good reason: it allows JavaScript developers to build cross-platform apps using their favorite framework.
I have several ideas for offline desktop apps that would benefit enormously from Electron. While I currently have more Vue experience, I am fairly certain my next Electron app will be made with Svelte.
I mentioned my interest in SwiftUI last year too, and I'm including it here because it still holds my attention. Realistically, I will probably not have time to dabble.
Almost forgot this one but I've become tentatively interested in Crystal after hearing about it on No Plans To Merge. Seems to be a great language for building command line applications. I wish I had a use-case for it at the moment but I don't, so I'll shelve it under "cool stuff that I may or may not use at some point".
If there's a pattern here, it's that I am fascinated by way more technologies than I can give proper attention to. The good news is that in 2020 I've already used about half of these, and there's probably still room for more.
Web tech is in constant flux, which is both a blessing and a curse for us developers. Personally I see it as a good thing, especially when it keeps us on our toes and makes us come back for more.
While using artisan (Laravel's CLI tool) recently, I was intrigued enough by the cute ASCII logo to take a look at the source code to see how it was made.
What interested me the most was not the logo itself, but rather the custom colors. I admit I haven't dug deep before into what makes these console commands tick. To my knowledge, Laravel doesn't offer custom colors out of the box.
I already knew that Laravel's console uses Symfony's Console Output Formatter package(s) under the hood, which in turn offer a variety of colors and styles that you can apply to your output.
Armed with this knowledge, the <fg=cyan>...</> tags in Livewire's code made perfect sense.
For my future convenience I created the following two commands:
The Ghostbusters logo was copied from this lovely repository of ASCII art. The Laravel console command in the gist generates the colored logo in the main article image.
Let's take a closer look at what colors and styles are available and how they can be applied.
Foreground colors
Usage:
$this->line('<fg=magenta>Magenta text</>');
Background colors
Usage:
$this->line('<bg=blue>Blue background</>');
Options
Usage:
$this->line('<options=bold>Bold text</>');
Styles
Usage (each style is its own individual tag):
$this->line('<question>question</question>');
Custom combos
You may certainly combine the above to produce custom effects.
Usage:
$this->line('<fg=blue;options=blink;bg=yellow>blue text on yellow background</>');
$this->line('Clickable URL: <href=https://github.com;fg=blue;options=underscore>github.com</>');
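Beyond the inline tags, Symfony's formatter also lets you register a named style from PHP. Here's a hedged sketch for a Laravel command's handle() method (the style name fire is my own invention):

```php
use Symfony\Component\Console\Formatter\OutputFormatterStyle;

// Register a reusable named style on the output formatter
$style = new OutputFormatterStyle('red', 'yellow', ['bold']);
$this->output->getFormatter()->setStyle('fire', $style);

// Then use it like any of the built-in style tags
$this->line('<fire>Handle with care!</fire>');
```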
I don't know about you but I'll be sure to make my future Laravel console command output more colorful!
Are you a developer who is feeling depressed or sad that their career doesn't seem to be going anywhere? Have you ever felt some of the following?
I'm stuck at my job and I can't get out because:
- I'm too comfortable where I'm at
- no other company would match my salary
- the location or benefits are too hard to give up
- I like my coworkers too much, or I made friends and I don't want to leave them behind

My technical knowledge is outdated because:
- our stack is old
- I'm maintaining a legacy project
- company/department policy is preventing me from using modern tech
- there's no motivation to learn anything new
- no one on the team is excited about their career and/or sharing their knowledge with others
- management seeks to maintain the status quo (there's actually a good case for this from a business point of view)

Impostor syndrome:
- I feel stupid compared to devs working at other companies (despite building/managing an entire project/app on my own)
- I feel overpaid for the technical knowledge I have
- I don't feel capable of passing an interview
- I don't feel capable of handling a complex project

I don't feel empowered:
- my "rank" is too low to make any changes
- no possibilities for promotion
- change is discouraged
I'm bored / there's very little work to do.
The company's product or service is uninspiring, or I'm indifferent to the company's mission.
Other - tweet at me and I'll update the list.
I don't mean to trivialize your particular situation, but if you are employed consider this: it could be worse. You could be unemployed. Having a paying job (regardless how bad it seems) puts you in a position of power, and there are a few ways to leverage this. Read on.
If you feel like things are stagnant (tech stack, tooling, hardware, etc), slowly start advocating for change. You want to do it gradually, picking the low-hanging fruit at first.
Story time I once worked at a job where management had certain rules about software we were allowed to use, which included an archaic code editor. Since I spent quite a few years there, I had become isolated from more modern tools and techniques. A new hire brought knowledge of this cool new IDE called PHPStorm. Intrigued, I quickly realized the benefits of this tool (read: a big increase in productivity). Being a lead, I leveraged my communication channel with upper management and started pushing for a switch to PHPStorm, all the while promoting the advantages it brought over our old tool. It took a few months but eventually we had ourselves shiny new PHPStorm licenses.
No matter your position, from intern to principal architect, your suggestions matter, and should be heard. A peer or someone in a more senior position will eventually notice, as long as you are thoughtful and provide good arguments for wanting change. Do it often enough, be mature about it, and don't make it sound like a complaint or criticism toward the company.
Just remember, in general companies are more willing to entertain ideas that 1) reduce costs and/or 2) increase profits. Smaller companies tend to be more flexible.
Frame your requests for change in terms of productivity gains, but back up your claims with solid research.
Being bored at work or not having enough to do doesn't mean you should count the minutes until 5 PM, or spend most of your time on social media or Reddit. Here are some suggestions:
Once in a while it's worth taking a step back and analyzing how comfortable you are in your (old) ways. That could be a sign of regression. Never stepping out of one's comfort zone can be reassuring, but also devastating to one's career. The world does not stand still, nor should your desire to push boundaries.
If your company does not or will not provide you with learning opportunities, your best option is to pick up the (virtual) books and study on your own in your spare time.
Even if your desired tech has no relation to your day job, some of those skills might still apply. Think concepts, techniques, patterns, best practices, etc.
When you surround yourself with things that you aspire towards, some of that will rub off on you. Hang out with other developers if you can, read articles and blogs, even developer-specific comics. Get hyped!
Keep reading for even more ways to become immersed.
Probably the best way to boost your skills and confidence as a programmer is to work on a side project. More on ideas for side projects below in the Common excuses section.
A side project is the gateway toward getting better at your current stack, or learning something new. Are you interested in Vue or React? No problem, build an app in either, while learning. Always wanted to build an iOS or Android app? Now's the chance.
Working on a side project has many more benefits than simply learning something new. It will put you above the majority of 9-5 developers, but it will also show potential employers how involved you are in your career.
Being a full stack developer is not everyone's cup of tea. Maybe you're really good at server-side development but not very comfortable on the front end. Or maybe you love UI but aren't exactly sure how that back end API works.
Maybe it's unreasonable to expect a developer to know the entire application cycle, but under today's (un)fortunate paradigm that is often the case. Now let's get one thing out of the way first: I, for one, am grateful for this paradigm, because it has forced me to adapt and, by extension, progress in my abilities.
My heartfelt advice is to stay open to learning anything even tangentially related to your career. You'll become a much more versatile developer once you've gained an understanding of how the entire process works. Some may say that it's better to specialize than to be the proverbial jack-of-all-trades. While I don't disagree, I still maintain that you'll end up an even better specialist if you're comfortable in another area.
Many full stack devs are polyglots by necessity, being required to know JavaScript and some form of back end language. That is a good thing. What I'm proposing goes even farther. If you can find it in you, try learning an entirely new language, even if you don't intend to make it your career. It can only open your mind and make you more receptive to ideas and concepts outside your sphere of thought.
A blog is a good place to jot down ideas and interesting techniques. Keep it focused on your end goal, if possible. What I mean by that is, if your goal is to enhance your career, perhaps it's not a great idea to post too often about your cats.
"What would I even talk about?" Don't overthink it. Learned something new? Write a blog post. Solved a problem? Blog about it.
"But nobody would read it / nobody knows I exist" Yep! If you don't already have a following, you will be talking to a wall for the next year or so. But if you stick to it and continue to produce quality content, people will eventually find you. Just be consistent about posting articles, and don't despair. I've had successful blogs in the past (in unrelated topics) and it takes a critical mass of articles, combined with time, before you'll see visitors.
"I don't want to pay for it" That's cool. Neither do I. This blog is hosted for free on Netlify. There are many free, statically-hosted solutions nowadays.
If there's one thing I regret, it's not joining Twitter under my developer persona years ago. Once again, I've had successful Twitter accounts in the past for other hobbies, but back then I wasn't paying as much attention to my career.
So what is Twitter good for? First, follow developers who produce quality content, or whom you admire. (Shameless plug → pls follow me k thx bye). This will likely inspire you but may have the side effect of intensifying impostor syndrome (don't worry though, it'll pass as you (re-)gain confidence in your abilities).
Second, tweet about developer problems and solutions you encounter. Post screenshots or gifs if you can. Take your time and craft a tweet before sending. People are more likely to notice a polished tweet.
Third, engage in quality discussions with other devs. Compliment, ask relevant questions or clarification, or offer insights. Try to keep it civil and friendly.
Not least, use Twitter to raise awareness on your own work, such as posting a new article on your blog, or working on a side project.
As with blogging, don't dwell on your initial lack of either followers or reactions to your tweets. For a very long time it may feel as if you're in a vacuum, but keep at it and you will succeed.
My final Twitter advice (some may disagree) is to keep your interactions focused on the persona you want to project. If your goal is to promote your developer side, stick to that. Don't meander too far into things like politics or unrelated subjects - at least until you have a bajillion followers. Some will argue that you should be yourself on Twitter (which I agree with - I am 100% genuine) but I also believe in separation of concerns. I won't mix my career with my hobbies and personal beliefs under the same account. I've unfollowed many brilliant developers who mainly shitpost or talk about random subjects. Let's face it, at the end of the day there's too much stuff in your timeline to filter through.
One thing that helped me become even more interested in the developer ecosystem was to listen to related podcasts. A few years ago I started listening to Full Stack Radio and my brain was suddenly flooded with a wealth of fascinating information. Every episode got me thinking about new concepts, and got me to play around with some of them. Of course, there are many other developer podcasts out there for you to choose. And if you have a long daily commute, it's the perfect time to listen to an episode.
Try to get your company (via your manager) to send you to a developer conference of your choice. Keep all the costs in mind. The best companies will pay for everything (conference cost, airfare, hotel, meals, transport, etc). If that's not in the budget, see if you can cover some of the costs yourself, or find a cheaper local conference or webinar. In general it's easier to get the company to pay for something that benefits your knowledge (as long as it can also benefit the company).
Even if the above is impossible, there are usually local meetups that you can attend evenings or weekends, especially around larger cities.
A conference or meetup has the benefit of putting you in contact and proximity with like-minded people, who are just as excited about learning something new. And you will pick up new things, I guarantee it. The best conferences (such as Laracon) will give you fond memories for years to come, and will make you feel part of something great.
Have you tried some of the above? Good! Now talk to your co-workers about them and get them excited too. Become a tech evangelist, if you will. I never tire of talking about my favorite technology. Share the knowledge you've gained and share the love. It is almost guaranteed that some of it will stick.
The best feeling is when your enthusiasm becomes contagious and a coworker adopts a technology they were ignorant or ambivalent about. Spoiler: it happened to me multiple times and I will make it happen again going forward.
To get a rusty old piece of machinery running can be hard until you apply a little oil in the right places. You can be the oil in the cogs.
Take a look at others whom you perceive to be successful. How did they get there? You'll probably view these people as being extremely smart, which I'm sure they are. Here's the thing: there's always someone smarter, and that hasn't stopped them from getting where they are now. There's no reason why you can't follow a similar path.
If you've been at the same company for a long time, your interviewing skills might be rusty, even if your technical skills are good. Let's face it, most developer interviews are designed to test how good you are at interviewing, not at the actual job you'll be doing.
Once you start interviewing, don't get discouraged by rejections. Keep at it, refine the process, rinse and repeat. That impostor syndrome might even hit hard but you got this. More importantly, if you are already employed, there's nothing to lose, so try to approach it with that in mind.
My one other bit of advice is to aim for a position or company that you genuinely feel will put you in a happier place and help advance your career. So don't just move for the sake of moving, rather make a conscious effort to end up in a better place - one that will improve upon your present condition.
What is defined by "bad job" is pretty subjective and for you to determine. There are, however, a few ways in which your bad job can help overcome your condition.
The proverbial "kick in the ass". This could be your best motivation for seeking change.
Use it as leverage when job-hunting. Being employed will make you a more viable candidate than someone who isn't. Equally important, you don't have to accept a lesser offer, unless you really really (and I mean really) want to work for company X. (Ask yourself though, if company X is lowballing you, should you accept that offer?)
Stability. It sounds counter-intuitive, but sometimes a boring job or a stagnant career could mean long-term job security. Some people value this above all else. A stable (but boring) job with sensible hours and a decent salary can allow you to pursue other interests or hobbies outside of work. It's for you to decide if this is a worthy tradeoff.
That's perfectly fine. No one should require that from you. Do it only if you enjoy it.
The reality, though, is that we live in a future where there's a tremendous amount of knowledge expected from developers, especially those who consider themselves "full stack". If your day job doesn't provide opportunities for growth, or if you're stuck on a 10-year old stack, you'll realize your skills are outdated as soon as you start looking for a new job.
If you want to have a better shot at another job, and a shot at a better job, chances are you'll have to do a lot of studying outside work hours. At least until you secure that new position. Playing catch-up, though, can be frustrating and a lot harder than short study sessions over time.
Individual situations will vary but based on my personal experience and numerous conversations with various developers, it's usually not a matter of "not having time" but rather of "it's not high on my list of priorities".
Once you start framing activities in terms of priorities, you might discover that you can actually make time for this new thing if you switch priorities around, or eliminate the ones that don't bring value.
This requires a little bit of self-reflection and awareness, but it makes sense to prioritize your most important goals. Maybe you can watch less TV or play fewer games. Maybe you can go out less often with your friends or spend less time on social media. It all depends on how much you value one priority over another.
I would argue that motivation is not as important as discipline and consistency. There doesn't always have to be a motivator; in fact, often there isn't one. If you're reading this, chances are you are going through some of these struggles or have in the past. In that case, the motivation can be one of: I want to improve my career; I want to be better at my job; I want to find another job; I want to feel more confident in my abilities.
Discipline, however, laughs in the face of motivation. If you tell yourself that regardless of how you feel, you're gonna pick up that computer and spend 30m/1h learning something new every night, you're already ahead of most people. Then follow that up with consistency: do it regularly, even if you only have 10m to spare.
I do that sometimes before leaving for work in the morning, while sipping my coffee. If I have an idea I'm itching to try, I'll bang out a quick 10-20m coding session. By doing this, I'm usually left with a lingering sense of satisfaction for the rest of the day, because even if I'm dead tired in the evening, I know that I accomplished something that day (well, outside work, of course).
So you've decided you want to learn new things but have no idea where to begin? A good place would be a technology that you admire, aspire to learn, or would like to use at your next job. It's as simple as that. You won't even have to spend a penny because the internet holds countless free learning resources for anything you can imagine.
Phooey! There are ideas all around you.
What started out as a few bullet points floating around in my head ended up a lot longer than I anticipated. If you've made it this far, I am humbled and grateful, and I thank you for that.
After many years working in the software industry, I realized I had accumulated all these bits and pieces of acquired "wisdom", based on my own failings and redemptions. I experienced some of these pains and applied a lot of these techniques successfully. I've never felt stronger and more confident than I am today, at the peak of my career. And you know what? This journey has just begun.
I started by making small, incremental changes. I didn't have a Twitter account a few months ago. This blog didn't exist a little over a year ago. A couple of years before that, I had built only one side project. Before that, I had timidly begun to immerse myself in dev culture. Even farther back, I made the decision to rectify the slump I was in through a career shift.
Few meaningful things happen overnight, so I'm going to leave you with an old cliché that says "the best time to make a change was X years ago; the second best time is now".
I hope you found this helpful.
]]>I've been using @calebporzio's Alpine.js in production on https://t.co/vMtgJGIPOK and it's awesome for simple dynamic functionality like this sign up form! pic.twitter.com/NmdUeAwM9m
— Placebo Domingo (@brbcoding) January 1, 2020
Today I'll explain how I built this.
You might be wondering what Alpine.js is. In essence, it is a front-end micro-framework that lets you build dynamic behavior quickly and easily, right in your DOM, with a minimum of actual JS. It was created seemingly overnight by the never-cease-to-amaze Caleb Porzio.
I recently implemented Stripe payments on 1secret.app and I thought it might be neat to allow the user to pay for a Premium subscription while they're signing up for a new account. They can also sign up for a Free account, then upgrade later from within the app.
I also wanted to update the text (along with the price) on the "Sign up" button when they select between the options Free, Monthly $10/m, and Yearly $95/y. So the button text would become Sign up for free, Sign up monthly for $10, etc. I've seen this pattern used before and I like it because it gives the user clear expectations of how much they will be charged (if at all).
The docs in the Alpine.js repo are fairly concise so I won't bother you with repeating everything. Let's dive into how I actually built this little feature.
Quick background: 1Secret is a Laravel app under the hood. All the behavior described below happens in Blade templates.
The sign up form lives in a register.blade.php file. In addition, there's just one line of code that goes into the master layout template, app.blade.php. The latter is the template where all the content is yielded. The line in question is right before the closing body tag, as shown below:
views/layouts/app.blade.php
...
@yield('alpine')
</body>
</html>
Here's the simplified code for the register.blade.php template (I've omitted the text inputs, classes, and a lot of layout stuff for brevity). Also, I hope you'll forgive my code highlighter - it doesn't seem to handle Blade syntax well.
views/auth/register.blade.php
@extends('layouts.app')
@section('alpine')
<script src="https://cdn.jsdelivr.net/gh/alpinejs/alpine@v1.1.5/dist/alpine.js" defer></script>
@endsection
@section('content')
<form
id="payment-form"
method="POST"
action="{{ route('register') }}"
aria-label="Register"
x-data="{ selected: 'opt1' }"
>
@csrf
<!-- Email and password inputs -->
<label>
Plan <a href="{{ route('features') }}">see features</a>
</label>
<label for="plan-free">
<input x-on:click="selected = 'opt1'" id="plan-free" type="radio" name="plan" value="standard-free" checked>
Free
</label>
<label for="plan-monthly">
<input x-on:click="selected = 'opt2'" id="plan-monthly" type="radio" name="plan" value="premium-monthly">
Monthly - <strong>{{ $premiumMonthlyPrice }}</strong> / month
</label>
<label for="plan-yearly">
<input x-on:click="selected = 'opt3'" id="plan-yearly" type="radio" name="plan" value="premium-yearly">
Yearly - <strong>{{ $premiumYearlyPrice }}</strong> / year <span>- save <strong>$25</strong> per year</span>
</label>
<div x-show="selected !== 'opt1'" x-cloak>
<label x-show="selected !== 'opt1'" x-cloak for="card-element">
Credit or debit card
</label>
<div x-show="selected !== 'opt1'" x-cloak id="card-element">
<!-- A Stripe Element will be inserted here. -->
</div>
</div>
<div x-show="selected !== 'opt1'" x-cloak>
<!-- Stripe: used to display form errors. -->
</div>
<button
id="card-button"
name="submitPayment"
type="submit"
data-secret="{{ $intent->client_secret }}"
x-text="selected === 'opt1' ? 'Sign up for free' : (selected === 'opt2' ? 'Sign up monthly for ${{ $premiumMonthlyPrice }}' : (selected === 'opt3' ? 'Sign up yearly for ${{ $premiumYearlyPrice }}' : 'Sign up for free'))"
>
Sign up for free
</button>
</form>
@endsection
Let's run through smaller snippets of code in register.blade.php, starting with how Alpine.js is loaded:
@section('alpine')
<script src="https://cdn.jsdelivr.net/gh/alpinejs/alpine@v1.1.5/dist/alpine.js" defer></script>
@endsection
Here, I'm loading it from the CDN, although you can also install it with npm.
I opted to load the Alpine.js script only on the register page. To do that, I've defined @section('alpine'), which I'm then yielding in app.blade.php.
Note 1: At the moment this is the only page where I'm using Alpine, but in the future I'll probably just @include the script tag from a Blade partial.
Note 2: @push-ing the script to a @stack is generally a cleaner way. I should do that.
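For reference, the @push/@stack approach from Note 2 might look something like this (a sketch, not what's currently in the app):

```blade
{{-- views/layouts/app.blade.php: render any pushed scripts before </body> --}}
    @stack('scripts')
</body>
</html>

{{-- views/auth/register.blade.php: push the Alpine script onto the stack --}}
@push('scripts')
    <script src="https://cdn.jsdelivr.net/gh/alpinejs/alpine@v1.1.5/dist/alpine.js" defer></script>
@endpush
```

The advantage over @section/@yield is that any view or partial can push onto the same stack, without each one needing its own dedicated section.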
Next, you might have noticed the strange x- directives in the HTML tags. These are what make Alpine.js tick: you apply them to DOM elements to control behavior. They should feel familiar to any Vue developer, and in fact that was Caleb's intention when he named them.
The key is the x-data directive. It represents the "state" for all the child elements, in the form of a JavaScript object. When you assign x-data, it is very important to put it on the correct element. So, for example, if you want to control a button on a form, you should put x-data on a parent element. In this case I put it on the form element itself, because there are other things in there that I want to hide or show depending on this state.
Here, x-data says that I want "option 1" (or opt1, in other words "Free") to be selected when I first load the page.
<form
id="payment-form"
method="POST"
action="{{ route('register') }}"
aria-label="Register"
x-data="{ selected: 'opt1' }"
>
Moving on, each radio button has an x-on:click="selected = 'optX'" directive. This says "when I click this option, change the selected state to that option".
<label for="plan-free">
<input x-on:click="selected = 'opt1'" id="plan-free" type="radio" name="plan" value="standard-free" checked>
Free
</label>
To toggle visibility, I've sprinkled a few x-show directives on the Stripe payment form elements. For example, x-show="selected !== 'opt1'" says "show this element if the selected option is not 1", or in actual English, "hide the payment form if the Free plan (opt1) is selected".
Finally, there's also an x-cloak directive, which prevents hidden elements from flashing briefly into visibility before Alpine.js has a chance to hide them.
<div x-show="selected !== 'opt1'" x-cloak>
<label x-show="selected !== 'opt1'" x-cloak for="card-element">
Credit or debit card
</label>
<div x-show="selected !== 'opt1'" x-cloak id="card-element">
<!-- A Stripe Element will be inserted here. -->
</div>
</div>
<div x-show="selected !== 'opt1'" x-cloak>
<!-- Stripe: used to display form errors. -->
</div>
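One caveat worth mentioning: x-cloak only does its job if you pair it with a CSS rule that hides cloaked elements (this rule comes from the Alpine.js docs):

```html
<style>
    [x-cloak] { display: none; }
</style>
```

Alpine removes the x-cloak attribute from each element once it initializes, so the rule stops applying exactly when Alpine takes over visibility via x-show.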
If you're wondering why I'm using the same x-show on both parent and child elements: there was some weirdness going on with the Stripe form (which is rendered via magic provided by stripe.js), and this was my solution for handling that.
I also made an (even more simplified) Codepen demo for convenience.
See the Pen Payment Form by Constantin (@brbcoding-the-selector) on CodePen.
Alpine.js is one of the coolest things in the dev world that came out of 2019, in my opinion. It should make quick work of simple behavior that we traditionally turned to jQuery, Vue/React, or plain JavaScript for.
You might be thinking, why not just use jQuery, or plain JS then? Several reasons.
Like Tailwind CSS, Alpine.js might take a moment to click, but once it does, the possibilities are endless. So give it a try and build cool things!
If you'd like to give 1Secret a spin, you can use Stripe's test credit card number 4242 4242 4242 4242 to sign up for a free Premium account until the official launch, which should happen later this year.
I'll preface this by clarifying that I'm trying to keep personal stories to a minimum, especially on this website and blog, but also on Twitter. My online presence is strictly focused on developer-related topics and I intend to keep it that way.
Briefly though, 2019 has brought more personal tragedy than previous years. Thankfully, things are settling down, but the reality is that as we grow older and feebler, so do those around us, and these tragedies become more commonplace.
I won't give too many details about my day job, but I've been working at the same company for the whole of 2019 and then some. In the spring I shipped a new project that has been running successfully in production ever since. It's something I built entirely on my own (team of 1) and I'm pretty happy with the way it turned out and how it performs. Very importantly, it has been running without a hitch and very little supervision, which allows me to go on a lengthy vacation without having to worry about it.
At work, most of our stack is Laravel, plus a sprinkling of Vue and other technologies. Myself, I use TailwindCSS for the projects I own. I am very grateful for this stack, as there aren't alternative technologies I'd rather use at the moment.
Another thing that I'm very satisfied with is the amount of new features I built in 2019, both on the project I own, as well as another that I collaborate on. I get a lot of satisfaction out of building things with code, and this has contributed to an overwhelming majority of days when I didn't dread going to work. Laravel has been a huge factor in this, as it makes problems a lot easier to solve and significantly more pleasurable.
It turns out that this blog is now a little over a year old. I say "blog" and not "site", because I started out with an entirely different concept and domain in 2018, only to change it later once I had a better idea what I wanted to do. The blog articles, however, were carried over.
For those who have no idea what I'm talking about, the site started out as a sort of "launchpad" for my personal projects. I intended for it to be an "organization" under which I would bring together the work I do in my spare time. I called it Omigo.sh, trying to be cute.
Towards the end of August 2019 I realized that it would have served me a lot better if I had made a portfolio website instead of an anonymous pseudo-organization. So I changed it to what you see today, preserving all the blog posts. Now, the landing page is a brief summary of my portfolio with a little bit about myself.
In 2019 I posted a total of 38 articles which seems way more than I remember writing. As with my tweets, I don't have any kind of schedule for writing articles. I do it when I have something to say, which could happen two days in a row or once every two weeks. My posting frequency is very much influenced by all my other hobbies and activities, as I have to share time between them.
I use Google Analytics to track visits to the blog, which helps stroke my ego occasionally when I check on it and notice the traffic increasing. I was pleasantly surprised to find that there's been a pretty healthy increase in visits since I switched over to the current domain.
There's been a lot of discussion in the community about the ethics of using Google products, especially Analytics. To tell the truth, I feel a bit icky myself using it, and I'm mulling switching to something else. I might just do that in 2020, but for now feel free to use an adblocker, because I have nothing to sell and I don't really care.
I write a lot about solutions and fixes to all kinds of edge cases and gotchas that I run into in my day-to-day work. This makes for some niche articles that are, understandably, not very popular with everyone. Here are some of my most visited articles in 2019 (from most popular to least):
2019 was the year I hoped to "launch" my nearest and dearest personal project to the world, and that is 1Secret. While 1Secret has been up and running for over a year with a tiny handful of active users, it's still technically in beta. You can sign up and start using it if you want, but it's just not there in terms of how I want it to work.
1Secret is a service that allows you to share sensitive data (text or files) through unique URLs that expire after a set time. Once the URL (or secret) expires, the data is destroyed on the server permanently.
While I'm fully aware that one should not delay a launch indefinitely until some imaginary goals are met, I'm just not comfortable announcing it at this point. There are some architecture changes that need to be done now rather than later, to ensure long-term viability. I'm not in any particular rush though.
This one's an expense tracker that I started over a year ago but shelved to focus on 1Secret. It's something I need for my own expense tracking needs and I really hope I'll be able to get back to it soon.
I came up with an idea related to mountain biking and MTB trails that I built a proof-of-concept for on a weekend. It's a very simple tool and quite niche but it looks promising, so I might build a very simple version of it.
My idea list is ever-expanding. I'm sometimes tempted to try some of them out but I want to resist being pulled in too many directions.
2019 has been the year of Laravel and Vue exclusively. The only other technologies (in the same ecosystem) that I've tried briefly are Livewire and Alpine, both by Caleb Porzio. I used the former on a little demo project, while the latter is currently used to build new dynamic functionality on 1Secret. They're both amazing and I'll definitely use them extensively going forward.
Health-wise I'm good, and there weren't significant changes in my fitness routine, except for one thing. I continue to hit the gym 2-3 times a week for weightlifting and then I usually ride bicycles on the weekend.
In 2019 I've transitioned more towards mountain biking, moving away from the road variety. This year I finally took the plunge and bought my own mountain bike. Even though it happened late in the year, it has already helped me improve my trail-riding skills. I signed up for a 2-day mountain bike camp in the spring of 2020 and I'm very excited about that.
In 2019 I only read 13 books unfortunately, which is less than the previous year. This can be explained by the other hobbies that demand my time, as well as the fact that few of these books have been truly engaging. Of note I'll mention the Dark Forest series by Cixin Liu, Kafka on the Shore by Haruki Murakami, and book 3 of the Expanse series, Abaddon's Gate.
I don't watch TV in the traditional sense but I do watch a fair amount of movies and TV shows.
In 2019 I saw 140 movies, but most of them were uninspiring to say the least. Some that I thought were excellent, in no particular order are: Bohemian Rhapsody (2018), Spider-Man Into the Spider-Verse (2018), Green Book (2018) and Boy (2010).
I also watched 14 seasons from various shows. A really good one is Good Omens, especially since it has only one season. All the shows I watched were high quality and that's because I won't commit if I don't like it, as opposed to movies which are a one-off affair.
I've also been an avid PC gamer, although less so the last couple of years. In 2019 I've played mostly World of Warships, of which I'm a big fan, along with a little of The Adventures of Van Helsing and Grim Dawn. After the announcement of Diablo 4, as a huge fan of the series and someone who's played all of Blizzard's games, I was struck by nostalgia and started playing Diablo 2 again. Ironically, my high-end gaming PC is used to run a 20-year-old game, in 800x600 resolution no less. Finally, I just picked up Bloons TD 6 (a tower defense game) on a whim for $1 and it's been surprisingly fun.
I don't expect things to change a lot in 2020, but there are a few ways in which I'd like to progress.
On the dev tech side, I'm looking forward to Laravel 7 and 8. I like to stay on top of upgrading all my projects whether at work or at home. Then there's of course Vue.js 3 which brings some new paradigms (composition API and other goodies). Tailwind 2 is also on the horizon and it always gets me excited when there are new updates.
As far as new technologies, I hope to get the chance to work with Inertia.js and Livewire a lot more. There's also Svelte which I keep meaning to learn but there never seems to be time for that.
Outside of dev stuff, I want 2020 to be the year of "taking it to the next level" in terms of mountain biking. Not only am I taking part in the 2-day MTB camp I mentioned, but I'll have a full season of dry trails to practice on with my own bike.
I would hope to read more books in 2020 but that's not very likely, unless I stumble across really good books that are easy reads and keep me hooked.
Finally, I plan to remain as healthy as before and I also wish that upon everyone who chances upon this article.
If you've made it this far, thanks for reading my lengthy 2019 synopsis, and may 2020 be your best year yet!
]]>GuzzleHttp\Exception\RequestException : cURL error 60: SSL certificate problem: unable to get local issuer certificate (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
at C:\Users\MyUserName\code\myproject\vendor\guzzlehttp\guzzle\src\Handler\CurlFactory.php:201
197|
198| // Create a connection exception if it was a specific error code.
199| $error = isset($connectionErrors[$easy->errno])
200| ? new ConnectException($message, $easy->request, null, $ctx)
> 201| : new RequestException($message, $easy->request, $easy->response, null, $ctx);
202|
203| return \GuzzleHttp\Promise\rejection_for($error);
204| }
205|
Exception trace:
1 GuzzleHttp\Handler\CurlFactory::createRejection(Object(GuzzleHttp\Handler\EasyHandle))
C:\Users\MyUserName\code\myproject\vendor\guzzlehttp\guzzle\src\Handler\CurlFactory.php:155
2 GuzzleHttp\Handler\CurlFactory::finishError(Object(GuzzleHttp\Handler\CurlHandler), Object(GuzzleHttp\Handler\EasyHandle), Object(GuzzleHttp\Handler\CurlFactory))
C:\Users\MyUserName\code\myproject\vendor\guzzlehttp\guzzle\src\Handler\CurlFactory.php:105
Please use the argument -v to see more details.
I'm almost certain that the PHP 7.4 upgrade wasn't the only cause. Previously, I had screwed up a local SSL certificate that I was using for https in the browser for my local projects.
The first thing I tried was to SSH into the Vagrant box and run the artisan command from there. As expected, this worked, because Homestead is properly configured, including SSL certificates. If you don't care about being able to make Guzzle requests in your local terminal (using the locally-installed PHP), then just run the command from the Vagrant box.
Since I wanted to be able to run the command in the Git Bash terminal, I had to fix the problem.
First, if you don't already have a generic SSL certificate (local/test environment only - NEVER USE THIS IN PRODUCTION), grab one from here. I keep mine in the home folder, which is C:\Users\MyUserName\ on the PC.
Next, locate your PHP installation. On Windows, mine is at C:\php-7.4. Open php.ini, find the block shown below, and add the absolute path of the certificate to it:
[curl]
; A default value for the CURLOPT_CAINFO option. This is required to be an
; absolute path.
curl.cainfo = C:\Users\MyUserName\cacert.pem
That's it. Now you should be able to make Guzzle requests again from your local terminal. To verify that PHP picked up the setting, you can run php -r "echo ini_get('curl.cainfo');" and check that it prints the path you configured.
]]>Upgrade to PHP 7.4 with Homebrew on Mac is a very succinct article by @brendt_gd that boils it down to two simple commands: brew update and brew upgrade php.
The problem I ran into was that my PHP CLI in the terminal remained linked to the previous version. Checking the version before and after running the brew command produced the same result:
$ php -v
PHP 7.2.9 (cli) (built: Aug 21 2018 07:42:00) ( NTS )
Just to make sure 7.4 was actually installed, I ran the upgrade command again, then checked the actual location of PHP 7.4:
$ brew upgrade php
Warning: php 7.4.0 already installed
$ ls /usr/local/etc/php/7.4
OK
To switch the PHP CLI to 7.4, first I ran Homebrew's unlink/link command:
$ brew unlink php && brew link php
This should produce an output similar to this:
Unlinking /usr/local/Cellar/php/7.X... XX symlinks removed
Linking /usr/local/Cellar/php/7.4.0... 24 symlinks created
Finally, you need to export the proper path variable for the PHP executable in either .bashrc or .zshrc. These are typically located in your home (~) folder:
$ cd ~
$ vi .zshrc
Locate the following (or similar) line...
export PATH=/usr/local/php5/bin:$PATH
... and change it to:
export PATH=/usr/local/bin:$PATH
Note: If you list the PHP executable...
$ ls -al /usr/local/bin/php
/usr/local/bin/php -> ../Cellar/php/7.4.0/bin/php
... you'll notice that /usr/local/bin/php is a symlink pointing into /usr/local/Cellar/php/7.4.0, which is the same location that was linked by Homebrew above.
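If you're wondering why the order of entries in PATH matters: the shell runs the first matching executable it finds, scanning the entries left to right. Here's a tiny self-contained demo using fake php scripts in temp directories (purely illustrative, not part of the actual upgrade):

```shell
# Two throwaway directories, each holding a fake `php` executable
old=$(mktemp -d)
new=$(mktemp -d)
printf '#!/bin/sh\necho "PHP 7.2.9 (fake)"\n' > "$old/php"
printf '#!/bin/sh\necho "PHP 7.4.0 (fake)"\n' > "$new/php"
chmod +x "$old/php" "$new/php"

# $new is listed first, so its php wins the lookup
PATH="$new:$old:$PATH"
php    # prints: PHP 7.4.0 (fake)
```

This is also why putting /usr/local/bin near the front of PATH makes the Homebrew-linked php take precedence over an older system PHP further down the list.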
Finally, run source .zshrc to get the terminal to update its configuration.
For good measure, close the terminal window and open a fresh one. If you now run php -v you should be rewarded with this:
$ php -v
PHP 7.4.0 (cli) (built: Nov 29 2019 16:18:44) ( NTS )
]]>The things I record most often are either in the browser or in some kind of text editor, mostly PHPStorm.
Please note that this is a Mac-only guide. I haven't needed to do this on a PC yet.
QuickTime Player - For basic screen recordings you don't need any fancy software because OSX comes with a built-in tool for this, though it's not very obvious. This tool is QuickTime Player.
Giphy - My tool of choice for converting a video to GIF is Giphy. I believe in the past you could upload GIFs anonymously to Giphy but that's no longer the case. I find that having an account is useful because you can reference your GIFs anytime you want. Here's my Giphy channel as an example.
(optional) PHPStorm - For code or script recordings, you can use your text editor of choice, but I prefer PHPStorm for the majority of my work. PHPStorm offers one feature that is very important for distraction-free screen recording, and that is presentation mode. In this mode, the current editor window covers the whole screen and all the menus are hidden.
Start the browser/text editor/app/document that you want to record.
Put the app in presentation mode if possible. Here are some instructions for my most used editors.
PHPStorm - Go to View > Appearance > Enter Presentation Mode. To exit, hover the mouse pointer at the top of the screen to reveal the main menu, then View > Appearance > Exit Presentation Mode.
VSCode - View > Appearance > Full Screen, then View > Appearance > Zen Mode. There's also the tab bar, which I haven't found a menu option to toggle, but you can avoid it by recording only part of the screen. Sorry, but I'm not a power VSCode user - PHPStorm works really well out of the box for me, without endless customizing or installing a few dozen plugins to get all the functionality I need.
Go to File > New Screen Recording. You may be asked to give QuickTime access to record your screen in System Preferences.
QuickTime's screen recorder, although very basic, offers some powerful tools. Among them is the ability to record the entire screen or only a portion of it. As part of the options, you can also choose to show mouse clicks in the recording or set a timer so the recording starts after a few seconds.
Recording a portion can be very handy when you want to capture only a certain part of the screen, while ignoring things like menus, scrollbars, irrelevant items on the screen, or other distractions.
Stopping the QuickTime recording is a little tricky. I haven't found another way than to CMD-TAB back to QT, hit New Screen Recording again, then click the stop button when the toolbar appears at the bottom of the screen.
You can trim the clip immediately after recording (and before exporting) it. Typically I don't want very long pauses at the beginning or end, while I'm starting or stopping the recording. Hit Edit > Trim or CMD-T, then drag the yellow handles accordingly. There's a handy preview showing you the result.
It's also possible to trim the video after exporting it. To do that, open it with QuickTime (should be set as the default) from the exported location. Then go to Edit > Trim or CMD-T.
Once you've recorded your video, go to File > Export As... > 4K or 1080p. For me 1080p is sufficient.
In Giphy, hit Upload, then add the video you created earlier.
I'm not sure what Source URL is, but I assume it's there if you upload a video that you didn't author yourself, which is usually not the case for screen recordings.
I recommend tagging the clip using comma-separated terms, such as code editor, ide, php, phpstorm.
Finally hit Upload to Giphy. Once it finishes uploading it directs you to the GIF's page where you can share it to social media or, my preference, Copy link. This option opens a dialog with several flavors of URLs. The "Short Link" is handy for sharing on Twitter.
I came across Kap, an "open-source screen recorder built with web technology" and I changed my workflow to accommodate it. It's simpler than QuickTime, it can capture portions of the screen, and it exports GIFs directly, so I can bypass Giphy. I highly recommend it. Mac only.
]]>Since http traffic is automatically redirected to https, you couldn't access the site non-securely either.
So how did it end up here? Well, I let my Forge subscription expire in the hope of scoring a Black Friday deal. After all, Forge is not critical in day-to-day operations; to me it's mostly useful for provisioning new sites. It turns out, though, that Forge also manages SSL certificates, renewing them automatically. How does it do that? It's a bit of a mystery, but I couldn't rely on it this time.
Fortunately this story will be short. Because I use Let's Encrypt, I headed there for help and ended up on Certbot, an amazing automated tool that handles all the SSL heavy lifting for you.
If you hit that link, you'll notice it's already pre-configured with my own environment: Ubuntu 18.04 running Nginx. I literally followed the instructions on this page to a T.
The only thing to pay attention to is Step 4, where you have the option of either letting Certbot configure Nginx automatically with the new certificate, or just getting the certificate (leaving you with the task of configuring Nginx appropriately). To avoid any drama, I chose option 1 and Certbot did an amazing job of auto-configuring everything.
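For reference, those two modes roughly correspond to the following Certbot invocations (a sketch; double-check certbot --help on your server, since flags can vary between versions):

```shell
# Option 1: obtain a certificate AND let Certbot rewrite the Nginx config
sudo certbot --nginx

# Option 2: obtain the certificate only; configuring Nginx is up to you
sudo certbot certonly --nginx
```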
Note 1: During Step 4 you'll be asked for an email address. It's up to you if you want to provide one. See below.
Enter email address (used for urgent renewal and security notices)
If you really want to skip this, you can run the client with
--register-unsafely-without-email but make sure you then backup your account key
from /etc/letsencrypt/accounts
Note 2: If you are running multiple sites on the same server (like I do), don't worry. Certbot scans all the sites and asks which domains you'd like a certificate for.
Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: 1secret.app
2: www.1secret.app
3: allmy.sh
4: www.allmy.sh
5: ...
6: ...
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1,2
Note 3 A good tool to check the status of your SSL certificate is linked at the bottom of the Certbot instructions: SSL Labs.
There's really nothing more to it. If you made it this far, chances are you were able to install or renew your SSL certificate(s).
]]>The image above is a screenshot of the error thrown by my HeidiSQL (Windows) DB client.
I had just imported a large amount of data into a few different databases for a Laravel project. I use Homestead as my local dev environment for Laravel projects on both Mac and Windows.
Now this error in particular is not very helpful. It states that it can't create the database, without any context. It took some investigating before I found out what really caused it.
As you will later learn, Homestead creates a separate partition for storing the databases, and it provisions 10GB for this purpose. That should be more than enough for any amount of local apps, right? Well, sure, until you need to import production data on which to test certain functionality that you can't test without actual, live data.
Sidenote I'm well aware that one way to handle this would be to write seeders but that particular project wasn't well suited for that. I needed not just live data but historical data as well. Finally, it was a lot quicker to import from prod than to write complex seeders.
Disclaimer This process was a lot of trial-and-error and I bungled some steps, but as I've mentioned before, my specialty is not sysadmin and the end result is close to what I wanted.
WARNING Try this at your own risk and only in your local environment, never in production, unless you really know what you're doing. But then you wouldn't be reading this article 😉
First thing was to find out more about this error. So I SSHed into my Vagrant box (vagrant ssh) and ran perror on error 1006:
$ perror 1006
MySQL error code 1006 (ER_CANT_CREATE_DB): Can't create database '%-.192s' (errno: %d)
Digging around the web I found a suggestion to fix this using mysql_upgrade (spoiler: it doesn't):
$ mysql_upgrade
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
mysql_upgrade: [ERROR] 3: Error writing file './mysql/#sql-4876_5.frm' (Errcode: 28 - No space left on device)
Let's run perror again, on error 28 this time:
$ perror 28
OS error code 28: No space left on device
Ah now I'm getting somewhere. Let me check the disk space real quick (more helpful Linux commands here):
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 967M 0 967M 0% /dev
tmpfs 200M 7.0M 193M 4% /run
/dev/mapper/homestead--vg-root 18G 5.1G 12G 31% /
tmpfs 997M 8.0K 997M 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 997M 0 997M 0% /sys/fs/cgroup
/dev/mapper/homestead--vg-mysql--master 9.8G 9.3G 0 100% /homestead-vg/master
vagrant 953G 167G 786G 18% /vagrant
home_vagrant_dbone 953G 167G 786G 18% /home/vagrant/dbone
home_vagrant_dbtwo 953G 167G 786G 18% /home/vagrant/dbtwo
home_vagrant_dbthree 953G 167G 786G 18% /home/vagrant/dbthree
...
tmpfs 200M 0 200M 0% /run/user/1000
Note that I anonymized the actual databases to "dbone, dbtwo, etc" for this example.
The line /dev/mapper/homestead--vg-mysql--master 9.8G 9.3G 0 100% /homestead-vg/master indicates that the MySQL partition is full.
How do I know it is the DB partition? The database files are usually stored in /var/lib/mysql on Ubuntu Linux.
$ ls -al /var/lib/mysql
lrwxrwxrwx 1 root root 20 Sep 29 12:53 /var/lib/mysql -> /homestead-vg/master
This shows that /var/lib/mysql is a symlink to /homestead-vg/master.
Run again with a trailing / to see the actual contents:
$ ls -al /var/lib/mysql/
total 188552
drwxr-xr-x 23 mysql mysql 4096 Nov 9 20:23 .
drwxr-xr-x 3 root root 4096 Sep 29 12:53 ..
-rw-r----- 1 mysql mysql 56 Sep 29 12:52 auto.cnf
drwxr-x--- 2 mysql mysql 4096 Nov 9 18:35 dbone
drwxr-x--- 2 mysql mysql 4096 Nov 9 18:45 dbtwo
drwxr-x--- 2 mysql mysql 4096 Nov 9 18:49 dbthree
drwxr-x--- 2 mysql mysql 4096 Nov 4 14:27 dbfour
-rw-r--r-- 1 root root 0 Sep 29 12:52 debian-5.7.flag
drwxr-x--- 2 mysql mysql 12288 Nov 9 17:31 dbfive
drwxr-x--- 2 mysql mysql 4096 Sep 29 12:53 homestead
-rw-r----- 1 mysql mysql 895 Nov 5 15:11 ib_buffer_pool
-rw-r----- 1 mysql mysql 0 Nov 9 19:53 ib_buffer_pool.incomplete
-rw-r----- 1 mysql mysql 79691776 Nov 9 20:18 ibdata1
-rw-r----- 1 mysql mysql 50331648 Nov 9 20:18 ib_logfile0
-rw-r----- 1 mysql mysql 50331648 Nov 9 19:11 ib_logfile1
-rw-r----- 1 mysql mysql 12582912 Nov 9 20:23 ibtmp1
...
drwx------ 2 root root 16384 Sep 29 12:53 lost+found
drwxr-x--- 2 mysql mysql 4096 Nov 9 20:23 mysql
...
drwxr-x--- 2 mysql mysql 4096 Nov 9 20:23 performance_schema
...
drwxr-x--- 2 mysql mysql 12288 Sep 29 12:52 sys
...
Let's check how much space my databases take. The following command lists all the databases and their sizes on disk, sorted by size in descending order.
$ sudo du -ch -d 1 /var/lib/mysql/ | sort -shr
9.3G /var/lib/mysql/
9.3G total
4.0G /var/lib/mysql/dbone
1.4G /var/lib/mysql/dbtwo
1.1G /var/lib/mysql/dbthree
833M /var/lib/mysql/dbfour
831M /var/lib/mysql/dbfive
495M /var/lib/mysql/x
376M /var/lib/mysql/xx
133M /var/lib/mysql/xxx
75M /var/lib/mysql/xxxx
25M /var/lib/mysql/mysql
20M /var/lib/mysql/xxxxx
2.2M /var/lib/mysql/xxxxxx
1.1M /var/lib/mysql/performance_schema
676K /var/lib/mysql/sys
16K /var/lib/mysql/lost+found
8.0K /var/lib/mysql/xxxxxxx
8.0K /var/lib/mysql/xxxxxxxx
8.0K /var/lib/mysql/xxxxxxxxx
8.0K /var/lib/mysql/xxxxxxxxxx
8.0K /var/lib/mysql/homestead
8.0K /var/lib/mysql/xxxxxxxxxxx
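As an aside, the sort -hr part is what makes this listing useful: the -h flag sorts human-readable sizes (K/M/G) numerically instead of alphabetically, and -r reverses the order. A quick illustration with canned values:

```shell
# sort -h understands human-readable size suffixes; -r makes it descending.
printf '%s\n' '8.0K lost+found' '4.0G dbone' '833M dbfour' '1.4G dbtwo' | sort -hr
# → 4.0G dbone
# → 1.4G dbtwo
# → 833M dbfour
# → 8.0K lost+found
```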
Next, I thought I should dig into the Homestead provisioning script. Line 372 mentions that the MySQL storage partition is 10GB and can be expanded with lvextend. Looking at the total disk usage, it's clear that I was hitting the limit.
So now I know that I need to increase the size of the partition. Let's go with 20GB. Here's a good explainer on lvextend.
Take 1 What is the logical volume? After some trial and error, I figured it's homestead-vg/thinpool (taken from the Homestead provisioning script).
$ sudo lvextend -L +10G homestead-vg/thinpool
Size of logical volume homestead-vg/thinpool_tdata changed from 40.00 GiB (10240 extents) to 50.00 GiB (12800 extents).
Logical volume homestead-vg/thinpool_tdata successfully resized.
Take 2 Read some more here.
Actually no, it's homestead-vg/mysql-master (you can derive it from the device name homestead--vg-mysql--master). The previous command just increased the size of my entire Vagrant box. Let's try this again...
$ sudo lvextend -L +10G homestead-vg/mysql-master
Size of logical volume homestead-vg/mysql-master changed from 10.00 GiB (2560 extents) to 20.00 GiB (5120 extents).
Logical volume homestead-vg/mysql-master successfully resized.
Take 3 Extend the logical volume over the partition at /dev/sda1. This is probably what I should have done initially.
$ sudo lvextend homestead-vg/mysql-master /dev/sda1
WARNING: Sum of all thin volume sizes (79.29 GiB) exceeds the size of thin pool homestead-vg/thinpool and the amount of free space in volume group (59.29 GiB).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Size of logical volume homestead-vg/mysql-master changed from 20.00 GiB (5120 extents) to 79.29 GiB (20299 extents).
Logical volume homestead-vg/mysql-master successfully resized.
Oops, I think I might have accidentally increased the size of the MySQL partition to 80GB. Which is cool in my case, I have plenty of disk space on that particular dev machine.
$ sudo fdisk -l
...
Disk /dev/mapper/homestead--vg-mysql--master: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
For some reason the MySQL database partition is still taking 20GB. Hmm... List the logical volumes:
$ sudo lvdisplay
...
--- Logical volume ---
LV Path /dev/homestead-vg/mysql-master
LV Name mysql-master
VG Name homestead-vg
LV UUID rYcEGB-dEB2-xJ4F-i8n4-u1KX-R7CU-xHmMhL
LV Write Access read/write
LV Creation host, time vagrant, 2019-09-29 12:53:09 +0000
LV Pool name thinpool
LV Status available
# open 1
LV Size 79.29 GiB
Mapped size 12.03%
Current LE 20299
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6
Finally I found out that I needed to resize the file system so it can use the additional space:
$ sudo resize2fs /dev/homestead-vg/mysql-master
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/homestead-vg/mysql-master is mounted on /homestead-vg/master; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 10
The filesystem on /dev/homestead-vg/mysql-master is now 20786176 (4k) blocks long.
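In hindsight, lvextend can do both steps at once: its -r (--resizefs) flag runs the appropriate filesystem resize right after growing the logical volume. A sketch of what the single-step version might have looked like:

```shell
# Grow the logical volume by 10G and resize the ext4
# filesystem in the same step (-r calls resize2fs for you).
sudo lvextend -r -L +10G homestead-vg/mysql-master
```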
Check the partition size again:
$ sudo fdisk -l
...
Disk /dev/mapper/homestead--vg-mysql--master: 79.3 GiB, 85140176896 bytes, 166289408 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Check the disk size one final time:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 967M 0 967M 0% /dev
tmpfs 200M 7.0M 193M 4% /run
/dev/mapper/homestead--vg-root 18G 5.1G 12G 31% /
tmpfs 997M 8.0K 997M 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 997M 0 997M 0% /sys/fs/cgroup
/dev/mapper/homestead--vg-mysql--master 78G 9.3G 66G 13% /homestead-vg/master
vagrant 953G 167G 786G 18% /vagrant
home_vagrant_dbone 953G 167G 786G 18% /home/vagrant/dbone
home_vagrant_dbtwo 953G 167G 786G 18% /home/vagrant/dbtwo
home_vagrant_dbthree 953G 167G 786G 18% /home/vagrant/dbthree
...
tmpfs 200M 0 200M 0% /run/user/1000
Well now it's way bigger than I wanted (80GB instead of 20GB) but at least it should have more space than I'll ever need.
Don't try this at home, kids. Or do, rather, as long as you stay away from production and limit it to your Vagrant environment. Keep in mind that starting at Take 1 above I screwed up some steps by running an additional resize or two, because I didn't fully understand how logical and physical volume resizing works. I still don't 😬 but I was able to correct the problem and continue working.
]]>Fear not, Laravel beginners. Despite having started more Laravel projects than I can count, I'm still tripped up by this error if I'm not paying attention.
Disclosure My local environment has always been Homestead/Vagrant on a Mac or PC. I've never used Valet, Docker or other methods. As such, these tips may only apply to a Homestead environment.
More often than not, the "No input file specified" error happens when you don't map your local project to the Vagrant folder properly in your Homestead.yaml file.
To simplify things, I'll just show you the relevant portions of my own Homestead.yaml.
folders:
- map: ~/source/laravel/my-awesome-project-1
to: /home/vagrant/my-awesome-project-1
- map: ~/source/laravel/my-awesome-project-2
to: /home/vagrant/my-awesome-project-2
sites:
- map: awesomeproject1.test
to: /home/vagrant/my-awesome-project-1/public
- map: awesomeproject2.test
to: /home/vagrant/my-awesome-project-2/public
databases:
- awesomeproject1
- awesomeproject2
For my latest project, for example, I had set up the sites and databases sections properly, but because I had a lot of sites and projects, I forgot to scroll to the folders section towards the top of the file. So when I loaded up the test site's URL awesomeproject1.test in my browser, I was presented with the always-charming "No input file specified" message.
The fix was to add the relevant entries for map and to.
Heads up! After making changes to Homestead.yaml, if Vagrant is running, just run vagrant reload --provision to get it to reload and integrate your changes. The --provision flag also applies if, say, you decide to change the database.
First of all, trying to be clever can bite you. I would suggest not changing or messing with the default Homestead/Vagrant folder structure /home/vagrant/projectname. I think I did that when I was learning Laravel and it only caused issues.
Next, here's a little insight into how my local folder structure is set up, by looking at my folders YAML section.
folders:
- map: ~/source/laravel/my-awesome-project-1
to: /home/vagrant/my-awesome-project-1
This tells Vagrant to map the local (Mac, PC, etc.) project folder (~/source/laravel/my-awesome-project-1 in my case) to the /home/vagrant/my-awesome-project-1 folder on the Vagrant box.
Your local project structure will very likely differ from mine, so take that into account. Mine is a little weird: all my projects go into the source directory, but inside that I have them grouped by technology, so there are sub-directories for laravel, vue, etc. Yeah, I'm not sure either if this is a smart way to organize projects, but it's in my muscle memory so it works for me.
Finally, I would discourage you from changing the Vagrant structure away from /home/vagrant/projectname. Or if you have to, keep in mind that you'll have to make a corresponding change to the sites section (and then re-provision the Vagrant box). Here's an example:
Let's say you want to organize your projects in Vagrant inside a projects sub-directory (why tho?). Then you'd end up with the following config:
folders:
- map: ~/source/laravel/my-awesome-project-1
to: /home/vagrant/projects/my-awesome-project-1
...
sites:
- map: awesomeproject1.test
to: /home/vagrant/projects/my-awesome-project-1/public
...
databases:
- awesomeproject1
Final tip If you're having trouble loading the awesomeproject1.test URL in your browser, make sure you've configured the test domain properly in your hosts file. On a Mac you'll find it at /etc/hosts, while on a PC it's usually at C:\Windows\System32\drivers\etc\hosts (reason #63421 why I don't like coding on a PC). Edit the file and add a new entry like so:
192.168.10.10 awesomeproject1.test
192.168.10.10 awesomeproject2.test
Homestead is configured by default to run on 192.168.10.10; it's right at the top of Homestead.yaml. All your local sites will run on the same IP.
And that's it. Hopefully this will help you get your Laravel project started quicker and with less headache!
]]>Ideally you should aim for your entire site to be performant but sometimes you gotta pick & choose. Under most circumstances, you want to focus on your landing page because that's where most visitors land, followed closely by other popular pages.
In development, as in life, priorities dictate what piece of your app will receive the most attention. While building 1Secret.app I thought I should take a look at the landing page (after a recent revamp) to see how it scores in Google's Lighthouse performance test.
I'm primarily a Mozilla Firefox user but I still prefer to use Chrome's dev tools for various reasons. There is one thing where Chrome bests Firefox, and that is the performance test. You can access that in Dev tools > Audits > Run audits
. It presents you with a bunch of scores (as shown in the hero image above) and a long list of well-documented suggested fixes.
My landing page got a lukewarm 69/100 score for accessibility, so I decided to spend a few minutes to see if there are any quick fixes I can do to improve this score.
When I first started building 1Secret.app I employed Bootstrap but also pulled in TailwindCSS. Needless to say, I soon started having regrets about using Bootstrap, but I was too far along to bother removing it. So I used both frameworks in parallel. Which is totally legit BTW. However, performance will suffer as a result, since Bootstrap is very heavy, both in terms of CSS and JS. And not just the framework's own JS, but also 3rd-party dependencies such as jQuery, Lodash and popper.js.
One day I decided to take the time and completely remove Bootstrap. It took a few hours of painstaking work, but in the end I shaved off a huge chunk from my CSS/JS bundles.
I thought Bootstrap was ancient history until Lighthouse informed me that I was referencing a chunky bit of JS that was affecting my loading time. Guess what, I had forgotten to remove the reference to Bootstrap's main library, hosted on their CDN. Basically I was still making the request to load the library, despite not using it. Duh.
Luckily that's a very easy fix. I suspect it's what pushed my Performance score from 98 to 99.
One good suggestion that Lighthouse gave me is to lazy load the images below the fold. It's a very good point, but unfortunately lazy loading is not yet implemented consistently across all browsers. So I made a half-hearted attempt at it by adding Chrome's new loading attribute, which I got from this article.
In my Laravel code this is what such an image looks like:
<img loading="lazy" src="{{ asset('svg/login-chapters.svg') }}" alt="Share secrets with {{ config('app.name') }}">
Unfortunately this doesn't seem to work, as I can't see the attribute being rendered by the browser, and Lighthouse still flags it as a problem. I'll chalk this up as a failure. I could write some fancy JS to handle this, but I hate overcomplicated solutions, so I'll wait until proper browser support is implemented consistently.
Adding aria-label to the site logo
My site's logo at the top left of every page is an image surrounded by an anchor. Because the anchor doesn't contain any text, it is inaccessible. So I added an aria-label attribute to describe what the link is about. Here's my Laravel before & after snippet.
Before
<a class="nav__logo" href="{{ url('/') }}" title="{{ config('app.name', '1Secret') }}" style="z-index: 1;">
@include('partials.icons.1secret-logo', ['viewBox' => '512 512', 'width' => '42', 'height' => '42', 'class' => 'mr-2'])
</a>
After
<a aria-label="Share a secret with {{ config('app.name') }}" class="nav__logo" href="{{ url('/') }}" title="{{ config('app.name') }}">
@include('partials.icons.1secret-logo', ['viewBox' => '512 512', 'width' => '42', 'height' => '42', 'class' => 'mr-2'])
</a>
I also happened upon another problem here which was two nested anchor tags that made no sense. That's what happens sometimes when you review your own code.
Adding title to the main nav hamburger menu
Similarly, my main navigation contains a hamburger menu that renders on mobile viewports. That menu is a button containing an SVG image, so there's no descriptive text for accessibility. Adding title="Main menu" takes care of the problem.
These tiny tweaks were enough to boost the landing page's Accessibility score from 69 to 95.
I won't pretend that 1Secret.app is fully accessible or maximally performant across all pages but this is a start. More importantly, this little exercise showed once again that following even a few of Lighthouse's suggestions can make a pretty significant impact on your site's performance and accessibility, which in turn has the potential to boost it higher in Google's search rankings.
]]>Download your desired PHP version from the official repository. At the time of writing this, I used PHP 7.3 VC15 x64 Non Thread Safe.
Unzip the archive, rename the folder to something like php (or, if you want to have multiple versions of PHP, php-7.3), and move it to your C:\ folder.
Open your bash terminal of choice. I use Zshell.
I maintain a separate .aliases file in my home folder. Add a new entry (or change the existing alias) for the new PHP executable.
alias php="/mnt/c/php-7.3/php.exe"
In your .bashrc or .zshrc make sure this line exists:
source ~/.aliases
Restart your terminal (or run source ~/.bashrc or source ~/.zshrc) and check the PHP version:
$ php -v
PHP 7.3.8 (cli) (built: Jul 30 2019 12:44:08) ( NTS MSVC15 (Visual C++ 2017) x64 )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.3.8, Copyright (c) 1998-2018 Zend Technologies
php.ini configuration
There are likely a few things you might want to configure in php.ini to bring it in line with your previous config, or to make it run properly. Here are some of the things I do.
In the php folder, copy php.ini-development to php.ini.
Increase memory_limit to 1G.
Increase post_max_size and upload_max_filesize to whatever works for you, typically higher than the defaults of 8M and 2M, respectively. I typically set my upload_max_filesize to 64M.
Uncomment the line ;extension_dir = "ext" by removing the ;.
In the Dynamic Extensions section, enable the following (YMMV; no need to enable all database extensions):
extension=curl
extension=fileinfo
extension=gd2
extension=mbstring
extension=openssl
extension=pdo_mysql
extension=pdo_pgsql
extension=pdo_sqlite
extension=sockets
extension=sqlite3
extension=xmlrpc
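Once the extensions are enabled and your terminal restarted, you can sanity-check that they actually load; php -m lists the enabled modules (the grep pattern below is just an illustrative subset):

```shell
# List loaded PHP modules and filter for a few of the ones we just enabled.
php -m | grep -Ei 'curl|mbstring|pdo_mysql'
```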
This should be sufficient to allow you to run PHP in your Windows (bash) terminal of choice. You can switch versions either by re-aliasing the php command, or by creating version-specific aliases (for example php72).
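To make version switching concrete, here's what the .aliases file might contain with two PHP versions installed side by side (the folder names are assumptions based on the layout described above):

```shell
# Version-specific aliases, plus a default "php" pointing at the newest build.
alias php72="/mnt/c/php-7.2/php.exe"
alias php73="/mnt/c/php-7.3/php.exe"
alias php="/mnt/c/php-7.3/php.exe"
```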
I've been working on a SaaS product and I'm close to an official launch, but based on preliminary user feedback I decided to rebrand the site to a new domain name. The previous domain was rather unfortunate - sikrt.com - which I thought was catchy but ultimately proved to be confusing and hard to remember.
Luckily it's a lot easier to rebrand when you haven't yet launched. So I found an objectively much better domain - 1secret.app and I embarked on the arduous journey of moving the old site over to the new domain. The process itself is not incredibly complicated but, for a dev who prefers not to deal with devops, I ran into a couple sticking points.
My sites are deployed with Laravel Forge and hosted on Linode. I also use a few 3rd party services such as Google Analytics & Recaptcha, and Mailgun.
Here, I'm documenting the steps I took to migrate to the new domain.
New site
Create the new site by going to Servers, then adding a new Root Domain. In my case that would be 1secret.app.
Attach the repo
Next, attach the Github/Gitlab/etc repository where your site's code is located.
Copy environment
There's one more thing to do, which is to copy the old environment (.env) to the new site. For a Laravel project, that would be everything below the DB section since I'm keeping the same database, although you should change that as well if you decide to go with a new database.
Keeping the same database allows me to skip migrating the new one and importing the old data into it.
My domain registrar is Namecheap.
Assuming you've already bought the new domain, click your domain, then Manage > Advanced DNS > Add new record.
Create two new A records:
Type | Host | Value | TTL |
---|---|---|---|
A Record | * | 104.200.17.161 | Automatic |
A Record | @ | 104.200.17.161 | Automatic |
Note that 104.200.17.161 is the IP of 1secret.app; you can get that either by pinging the domain or from Forge (under Sites).
Delete the existing CNAME Record and URL Redirect Record that were created automatically by Namecheap. These would be:
Type | Host | Value | TTL |
---|---|---|---|
CNAME Record | www | parkingpage.namecheap.com. | 30 min |
URL Redirect Record | @ | http://www.1secret.app/ | Unmasked |
Now wait for DNS to propagate (up to 48 hours, usually takes a lot less, perhaps 30 min).
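Instead of refreshing the browser, you can check propagation from the terminal; once both records resolve to the server's IP, you're good to proceed (dig comes with most systems, or just use ping):

```shell
# Query the A records directly; expect 104.200.17.161 once propagated.
dig +short 1secret.app
dig +short www.1secret.app
```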
Set up SSL
Once the DNS has propagated, it's time to set up SSL.
Go to Sites > 1secret.app > SSL > LetsEncrypt > Obtain Certificate. Let Forge do its magic.
**(Optional) Create a new database**
If you decide to use a fresh database, or change the name, go to Servers > Database > (dbname) > Add Database.
In your favorite DB client export the old DB, then import it into the new DB.
Start the queue worker
Sites > (1secret) > Queue > Start Worker (with default values).
Start the scheduler
To start Laravel's scheduler we need to create a new scheduled job. Go to Servers > Scheduler and add the following command (that is the standard path where the site is located on a Forge-provisioned server):
php /home/forge/1secret.app/artisan schedule:run
Otherwise leave the defaults in place, then click Schedule Job.
(Optional) Google services keys
In my case I use Google Recaptcha and Analytics, and I need to update the keys for the new domain. After creating a new set of keys I'll update them in .env
.
Enable quick/custom deployment
My deployment setup is fairly basic: automatic deployment happens whenever I push a new tag to Gitlab where my code is hosted. Forge has a few different hooks, including on each remote push. I prefer my deployments to be more predictable (and I commit/push often) so I like the middle ground of auto-deploy on tag + manual whenever I want.
Under Sites > Apps > Turn on Quick Deployment.
Next, copy the Deployment Trigger URL because we'll need to add that to Github/Gitlab.
Jump quickly to your git provider and add the trigger URL as a webhook. In my case that would be Gitlab > Settings > Integrations, with the webhook set to fire on tag push events.
I use Mailgun to send transactional emails from 1secret.app. That needs to be set up with my domain registrar (Namecheap).
A little rant first. Getting email to work properly has been the bane of my existence. Both Mailgun and Namecheap give slightly contradictory instructions and I was forced to find my own settings that seem to work. And yet, I still have a vague suspicion that perhaps I didn't do this perfectly. Oh well, let's dive in.
In Mailgun
Add a new domain: Settings > Domains > Add New Domain.
Domain name: mg.1secret.app (US region)
Check Create DKIM Authority and select 2048.
In Namecheap
Add a new DNS record. In the end these are the settings that worked for me:
Type | Host | Value | TTL |
---|---|---|---|
CNAME | email.mg | mailgun.org | Automatic |
TXT | mg | v=spf1 include:mailgun.org ~all | Automatic |
TXT | smtp._domainkey.mg | k=rsa; p=... | Automatic |
Mail settings -> Custom MX
Type | Host | Priority | Value |
---|---|---|---|
MX | mg | 10 | mxa.mailgun.org |
MX | mg | 10 | mxb.mailgun.org |
Once you've set these up, click Verify DNS Settings. This can take 24-48 hours to propagate but for me it was instant once I arrived at the correct set of values.
Go back to Forge and update .env with the Mailgun credentials:
MAIL_USERNAME=...@mg.1secret.app
MAIL_PASSWORD=...
MAIL_FROM_NAME=1Secret
MAIL_FROM_ADDRESS=hello@1secret.app
To clear the previous values from the config/env cache, reboot the server. I also run php artisan optimize:clear as a deploy task.
Now that I've got everything set up, I want to redirect all the traffic from sikrt.com to 1secret.app. The reason for that is I'm letting the sikrt.com domain expire, but until that happens, I want a permanent 301 redirect on it, to 1secret.app.
I tried several redirect methods (in Namecheap and Forge), but ultimately what worked for me was to add a forced redirect directly in the Nginx configuration.
You can do this manually on the server or just use Forge itself to edit the Nginx configuration for the old site: Sites > sikrt.com > Files > Edit Nginx Configuration.
Insert the line return 301 https://1secret.app; as shown below:
# FORGE CONFIG (DO NOT REMOVE!)
include forge-conf/sikrt.com/before/*;
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name sikrt.com;
return 301 https://1secret.app;
root /home/forge/sikrt.com/public;
Assuming everything went correctly, from now on, when someone loads sikrt.com they'll be redirected to 1secret.app. Another benefit is that, because this is a permanent redirect, Google knows not to penalize your site(s) for duplicate content.
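A quick way to confirm the 301 is in place (without letting curl follow it) is to inspect the response headers:

```shell
# -s silences progress output, -I fetches headers only.
curl -sI https://sikrt.com
# Look for "301 Moved Permanently" and "location: https://1secret.app".
```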
This concludes my fairly convoluted procedure for migrating a site that was provisioned with Forge from an old domain to a new one. It's not fun moving sites to new domains but luckily it's not a thing that needs to be done very often.
I set out to document the procedure in as much detail as possible but the various frustrations that popped up while I was doing it put a damper on that. I apologize in advance for any errors but if you do find any please drop a line . I also kind of regret not taking screenshots but those would have made the whole thing even longer and I just wanted to get through it.
]]>I've been using Laravel Forge for over a year at my day job, but also to provision and deploy my side project 1Secret.app. To me, the biggest benefit that Forge brings is the ability to easily and quickly provision Laravel-ready server instances, whether on AWS, DigitalOcean, Linode or others.
My server OS of choice is Ubuntu, and Forge has been doing some sort of magic to keep it updated to the latest version. This means I'm currently running 18.04 on multiple instances. This is all good, however there's still some maintenance that I need to perform manually from time to time, namely OS security patch and package updates. I also like to keep an eye on disk space and clear some of that if necessary.
If, when you SSH into your Forge instance, you see a message like below...
14 packages can be updated.
1 update is a security update.
*** System restart required ***
Last login: Sun Sep 8 22:09:04 2019 from xxx.xxx.xxx.xxx
... that means it's probably time to update those packages. Here's my procedure for doing that, bearing in mind that I'm not a sysadmin, and everything you read below was cobbled together from various sources but works 👍 for me.
In my case, 1Secret.app is served from 104.200.17.161, so my command will be (id_rsa is my private SSH key):
ssh forge@104.200.17.161 -i ~/.ssh/id_rsa
File system
The file system can easily fill up with stuff like uploaded files, logs, database, etc. First I like to see an overview of the total disk usage.
df -h
Filesystem Size Used Avail Use% Mounted on
udev 985M 0 985M 0% /dev
tmpfs 200M 816K 199M 1% /run
/dev/xvda1 20G 7.1G 13G 37% /
tmpfs 996M 0 996M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 996M 0 996M 0% /sys/fs/cgroup
/dev/loop0 18M 18M 0 100% /snap/amazon-ssm-agent/1068
/dev/loop2 18M 18M 0 100% /snap/amazon-ssm-agent/930
/dev/loop4 18M 18M 0 100% /snap/amazon-ssm-agent/1335
/dev/loop1 89M 89M 0 100% /snap/core/7169
/dev/loop5 89M 89M 0 100% /snap/core/7270
tmpfs 200M 0 200M 0% /run/user/1001
In this example, the important line is /dev/xvda1 20G 7.1G 13G 37% / because that is my primary disk. Here, it is 37% full out of a total of 20 GB, which is good.
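If you'd rather automate this check, a small awk filter over the df output can flag filesystems crossing a threshold. A hypothetical sketch (run against a canned sample here so it's self-contained; in practice you'd pipe in df -h directly):

```shell
# Hypothetical helper: flag any /dev/ filesystem over 80% full.
df_sample='Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 20G 7.1G 13G 37% /
/dev/loop1 89M 89M 0 100% /snap/core/7169'

echo "$df_sample" | awk 'NR > 1 && $1 ~ /^\/dev\// && int($5) > 80 {
    print "WARNING: " $6 " is " $5 " full"
}'
# → WARNING: /snap/core/7169 is 100% full
```

Swap the first line for `df -h | awk ...` to run it against the live system.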
Application storage
Laravel apps store stuff and things in myapp/storage. If you find that a lot of disk space is consumed by the storage folder, you can run something like this (see my Useful Linux Commands article) to check which subfolder takes the most space:
du -ch -d 1 | sort -hr
760K total
760K .
704K ./framework
28K ./logs
16K ./app
8.0K ./debugbar
In this example there's almost no space used. Let's move on.
Journal size
Another place where a lot of storage can potentially be used is the system journal. This is where Linux stores a lot of logging data, for example system events and such. It tends to grow in size over time. Depending on how important this data is to you, you can choose to delete some or all of it, or restrict how much space it can use.
Here's how I would check how much space the system journal uses on my Ubuntu 18.04 instances:
du -ach /var/log/journal/ | sort -hr
1.8G total
1.8G /var/log/journal/1302ef9b7d514d588b562228feb06a4c
1.8G /var/log/journal/
81M /var/log/journal/1302ef9b7d514d588b562228feb06a4c/system@2b1e5536b8964276bd01478033377b9b-000000000017bdd9-00058b9e4032a3be.journal
81M /var/log/journal/1302ef9b7d514d588b562228feb06a4c/system@2b1e5536b8964276bd01478033377b9b-0000000000167546-00058adae74a631a.journal
...
41M /var/log/journal/1302ef9b7d514d588b562228feb06a4c/system.journal
8.1M /var/log/journal/1302ef9b7d514d588b562228feb06a4c/user-1001@f935c142f48041da86bb9920da4f84de-000000000003acdb-0005815031482878.journal
...
8.0M /var/log/journal/1302ef9b7d514d588b562228feb06a4c/user-1001.journal
8.0M /var/log/journal/1302ef9b7d514d588b562228feb06a4c/user-1000.journal
1.8 GB may not seem like much, but when your entire instance is 20 GB, that's actually quite significant.
Clear journal entries manually
To recover disk space, journal entries can be cleared manually in a couple of ways.
Retain only the past two days:
sudo journalctl --vacuum-time=2d
Retain only the most recent 500 MB:
sudo journalctl --vacuum-size=500M
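To see how much space the vacuuming actually reclaimed, journalctl can report its own disk usage before and after:

```shell
# Report the disk space consumed by archived and active journal files.
journalctl --disk-usage
```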
Restrict max journal size
The journal size can be restricted through the configuration.
sudo vi /etc/systemd/journald.conf
Set SystemMaxUse=500M to restrict it to 500M.
Restart the systemd-journald service (see this for more details):
sudo systemctl restart systemd-journald
Clear /usr/src/
CAUTION Working with AWS instances, as well as S3, I found lots of linux-aws-headers-* files in /usr/src/. Based on my research, these should be safe to delete, which I did without negative consequences, but you should be extra careful just in case I'm wrong.
To clear the AWS-specific files out of /usr/src/, run this command:
sudo apt-get purge linux-aws-headers-4.15.0
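If you're nervous about what a purge will take with it (I was), apt-get can simulate the removal first; with -s nothing is actually deleted:

```shell
# Dry run: show what would be purged without touching anything.
sudo apt-get -s purge 'linux-aws-headers-*'
```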
Finally we're ready to update the system packages. The commands below can be run in sequence to upgrade all the packages. You can skip the list commands if you wish; those are just there to give you an overview of what packages there are.
You may see a prompt asking if you want to upgrade certain packages or configurations. That's where you need to be extra careful because it may overwrite your custom configurations. In my case, I usually get two prompts, for Redis and php.ini.
Update Redis? Y
Update php.ini? N
(keep the local version currently installed)
# updates available list of packages & versions
sudo apt update
# lists the installed packages
sudo apt list --installed
# lists the packages that can be upgraded
sudo apt list --upgradeable
# actually perform the package upgrades
sudo apt upgrade
# removes packages that are no longer required
sudo apt autoremove
Finally, reboot the server.
sudo reboot
Once the system has rebooted, SSH back into it. You should be greeted with this shiny new message:
0 packages can be updated.
0 updates are security updates.
Now check if your vital services are running. In my case there are only 3 I care about:
systemctl status systemd-journald supervisor redis
If these are green (Active: active (running)
), you are good to go.
# Starts the Nginx service
sudo systemctl start nginx
# Stops the Nginx service
sudo systemctl stop nginx
# Stops then starts the Nginx service
sudo systemctl restart nginx
# Gracefully restarts the Nginx service
sudo systemctl reload nginx
# Shows the status of the Nginx service
sudo systemctl status nginx
This concludes my maintenance procedure for Laravel Forge provisioned servers. I will update these instructions as I see fit, but in the meantime keep on forging ahead!
In a nutshell, 1Secret is a browser-based app that is trying to solve the problem of sharing secure or sensitive data over insecure mediums. The best example of that is sending passwords over email. That is an inherently insecure practice and I wish IT professionals would use it less.
Recently, I discovered that Outlook can generate and send encrypted links, but the process to open such a link is awkward, and I can only imagine creating one is just as inconvenient. Besides, you're tied to a particular technology, Outlook in this case.
1Secret aims to make this process a lot easier, but also very secure.
1Secret's main premise is the creation of transient - or short-lived - secrets. A "secret" is simply text or a file or both (later this will be expanded to multiple files and other goodies, but for now I'm keeping it simple).
Once a secret is created, you'll be presented with a URL that terminates in a random string. Instead of emailing a clear-text password, you'll be emailing this URL instead. The advantages of this should be clear once I explain security layers below.
Creating a secret is kept simple, with most fields optional.
Of note are Duration, what I call Attempts, and Password. The first two control how the secret expires (and is subsequently destroyed), while adding a password encrypts the secret with your own key, making it airtight.
To open a secret, load the URL in a browser. Depending on whether the creator encrypted it, you may be presented with a password prompt.
Once you've entered the correct password - if applicable - you'll see the message or file attachment.
If the file is an image, there's a low-resolution preview, but you can download the original image if you wish.
The message itself can be read, copied to the clipboard or downloaded as a text file.
1Secret is a bit like an onion, in that it offers multiple layers of security. Here are some of them.
The message (along with any files) is transported securely over HTTPS to the server where it is processed.
By not allowing a secret to live forever, I don't have to worry about forgetting that my secrets are spending the remainder of eternity on some server.
Transience is achieved in two ways: duration (or lifetime), which defaults to a sensible number and can be tweaked to your liking based on account type; and number of attempts (I really need to find an easier term for this), which determines how many times the secret URL can be opened before it is destroyed.
In both cases, once the secret has expired, it is purged (or destroyed) from the server and the URL becomes invalid.
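As a sketch of those two rules, the expiry check could look like this (my own illustration, not 1Secret's actual code; the field names `createdAt`, `durationMs`, `opens`, and `maxAttempts` are invented):

```javascript
// Hypothetical sketch of the two transience rules described above.
// Field names (createdAt, durationMs, opens, maxAttempts) are my invention.
function isExpired(secret, now = Date.now()) {
  const lifetimeElapsed = now - secret.createdAt > secret.durationMs;
  const attemptsExhausted = secret.opens >= secret.maxAttempts;
  return lifetimeElapsed || attemptsExhausted; // either rule destroys the secret
}

// A 5-minute secret created 10 minutes ago has expired by lifetime...
console.log(isExpired({ createdAt: Date.now() - 600000, durationMs: 300000, opens: 0, maxAttempts: 3 })); // true
// ...and a fresh one already opened 3 of 3 times has expired by attempts.
console.log(isExpired({ createdAt: Date.now(), durationMs: 300000, opens: 3, maxAttempts: 3 })); // true
```

Whichever rule trips first wins; the server then purges the secret.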
Every secret is encrypted on the server by default. This automatically protects the data at rest from being compromised. An attacker would need to compromise two separate pieces of the puzzle in order to decrypt your information. But this is where layer 2 comes in.
Optionally (but VERY HIGHLY RECOMMENDED) you can encrypt a secret with your own password or key. Just make sure not to use your account password! This will wrap your secret in another encryption layer, but this time you control the key. No one else - not I (the provider), nor an attacker - will be able to access your information without the password. Also, the longer the password, the better.
"Security through obscurity" is a term that refers to obfuscating something, or making it long, complex, random, hidden from casual inspection, or a combination of all of those. In general I don't like this practice. It has its place in certain situations but it should never be the only security measure used.
In this case it adds yet another thin layer on top of the existing security onion. This is achieved by the random URL string that is generated when you create a secret, and looks like this https://1secret.app/s/h5y85u4x
. When the secret expires, this URL is gone forever.
Here's a handful of the most used scenarios that I run into on a daily basis. For others, use your imagination.
Remember, the main benefit of using 1Secret is that each secret has a very limited lifespan, so you can "fire and forget" it and remain confident that it will be automatically destroyed when it reaches its end of life.
Create a password-encrypted secret, and email the 1Secret-generated URL instead. Then pass on the password through a separate medium, such as Slack, SMS, word of mouth, etc.
Always spread the risk by using separate mediums to transfer the URL and the password.
Yeah, you can use tools such as Dropbox, Google Drive and so on, but all those require a common interface (i.e. the program needs to be installed on both devices, you need to be signed in and so on). 1Secret needs only a browser, which is probably the most common and ubiquitous cross-device interface you'll find.
Besides, I don't have to worry about temporary bits and pieces of data cluttering my devices, since they only live briefly in the cloud.
For that reason, I use 1Secret a lot to transfer text and files between my desktop(s)/laptop(s) and phone.
A good example is setting up a crypto wallet on a new device. In that case I'll just create a very short duration secret (say, 5 minutes) containing the encryption seed from my main machine (password-encrypted of course), then log into 1Secret on my new device and copy/paste the seed into the new wallet. In this case the password remains in my head.
Another (slightly weird) scenario is for tweeting. Yes, tweeting. Bear with me.
Often I find myself composing a tweet on one device, perhaps processing a screenshot, but I'm unable to tweet it on that device because I'm not logged into my account for various reasons (let's say I'm on a public device). So I'll create a secret containing the tweet and any associated image, then open it on a device from where I can actually send the tweet.
Yes! Check out the Pricing page where you'll find a list of features that each tier offers.
The Free/Standard tier offers most of the functionality you'll need on a casual basis from such a service: password encryption and the ability to add smaller files.
Premium will cost in the ballpark of $10 / month and will offer more conveniences, longer durations, more generous storage, and so on.
Please note that you'll need to create an account in order to use the password encryption or file attachment features. I'm not doing this in order to harvest your email address (read the Privacy Policy to see how little data I collect about you), but rather to prevent abuse.
For now, as you'll read below, I'm offering all the Premium features for free for the duration of the beta to anyone who signs up.
I don't have a payment solution in place but when the time comes I will support credit cards as a baseline. You can also pay me directly with Paypal if you prefer, or even cryptocurrency.
Glad you asked! I put quite a bit of thought into what I want to build next and made a roadmap.
If you happen to come across this article and are intrigued by the idea, give 1Secret a spin! As of this writing, and until further notice, it is in open public beta and everyone who signs up gets access to all the Premium features for free, until the beta ends.
Plus, as early adopters I will offer you a special discount when the service goes commercial.
Thanks and I hope you'll find 1Secret useful!
Initially I meant to use Omigo.sh as a central hub - or an umbrella organization if you will - for my side projects. I also blogged from the same site, but there was no personal information about me. Over time I decided I was going to be more transparent about who I am and what I do, so I made chasingcode.dev as a hybrid portfolio/resume website.
Once I made that decision, it made sense to transfer the blog over. Now all my articles from Omigo.sh are permanently redirected here.
If you skipped the home page, check it out and learn more about me. I'm planning more content aside from the blog itself, to expand and dive deeper into what makes me tick.
Thanks for visiting and I hope you'll find something useful here!
Let's say you have a simple form with a plain old submit to the server without Ajax. Sometimes the process takes up to a second or even more, depending on the payload. Obviously if you're sending a file it will take longer. During that time, it is possible for the user to hit the submit button multiple times, whether by accident or intentionally. In that case, the server will receive multiple submissions of the same form data.
A solution that I've employed often to ensure the form is submitted only one time - not because it's the best but because it's the quickest technique to reach for - is to disable the submit button once it's clicked. Since this is a server-side request, I am not too worried that the request will fail and the button will remain disabled.
Given the following HTML for our form:
<form method="post" action="/">
<button id="butt" type="submit">Submit</button>
</form>
We could use JavaScript to disable the button like so:
var butt = document.getElementById('butt');
butt.addEventListener('click', function(event) {
event.target.disabled = true;
});
The sequence of events goes something like this: the user clicks the button → the button is disabled → the form gets submitted → the server handles the request → it redirects wherever it is meant to.
That should be the end of the story. But wait, there's more!
Unfortunately this little snippet does not work consistently across browsers. As of this writing, I tested this in the desktop versions of Chrome 75, Firefox 67, and Safari 12.
In Chrome or Safari, clicking the button will disable it but NOT submit the form. In Firefox, the behavior is as expected: click - disable - submit.
Try it out for yourself on Codepen. If it's not immediately obvious what happens, in Chrome/Safari, after the button is disabled, it remains on the screen (meaning the form wasn't submitted). In Firefox, it is disabled and then it disappears (meaning the form was submitted).
What does one do when confronted with a situation like this, and they're not a JavaScript grandmaster? Well, reach for the ol' setTimeout
trick, of course.
What seems to be happening (correct me if I'm wrong) is that Chrome and Safari run the click handler before the form's default submit action, and once the submit button has been disabled, they no longer carry out that default action. The fix is to defer the disabling until after the submission has kicked off, by wrapping the offending code in a setTimeout 0
statement, like so:
var butt = document.getElementById('butt');
butt.addEventListener('click', function(event) {
setTimeout(function () {
event.target.disabled = true;
}, 0);
});
Or 1337 ES6 1-liner (I'm sure someone will find an even shorter way to write this):
document.getElementById('butt').addEventListener('click', event => setTimeout(() => event.target.disabled = true, 0));
Codepen for both long form and 1337 version.
Now the behavior is consistent in all browsers: click → disable → submit. Say what you will, but I've used this trick often when I run into similar situations and it works without fail.
One technique is to fetch the UTC time from the server and convert it to client time with JavaScript, using the browser's built-in APIs. Here's how.
var serverTime = '2019-07-19 17:04:03';
// split into components by "-", " ", ":" and convert to integer
var splitIntoComponents = serverTime.split(/-|\s|:/).map(c => parseInt(c, 10)); // [2019, 7, 19, 17, 4, 3]
splitIntoComponents[1] -= 1; // Date.UTC expects a 0-indexed month, so July must become 6
var date = new Date(Date.UTC(...splitIntoComponents));
date.toLocaleDateString(); // "7/19/2019" (in a US locale)
date.toLocaleTimeString(); // "12:04:03 PM" (in a UTC-5 timezone)
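A related tip (my addition, not part of the original snippet): the toLocale* methods accept a timeZone option, so the same UTC instant can be rendered in any named zone, not just the browser's local one:

```javascript
// Render one UTC timestamp in explicit timezones via the Intl options.
const date = new Date(Date.UTC(2019, 6, 19, 17, 4, 3)); // month 6 = July (0-indexed)

console.log(date.toLocaleString('en-US', { timeZone: 'America/New_York' }));
// 17:04 UTC is 1:04 PM in New York (EDT, UTC-4) on that date
console.log(date.toLocaleString('en-US', { timeZone: 'UTC' }));
// and 5:04 PM when rendered as UTC
```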
TL;DR An important piece of functionality broke because I added a new Vue component that happened to have the same name as an existing component, even though both were located in different directories.
Sikrt.com is built on Laravel, with a sprinkling of Vue here and there. These days, Laravel projects make it incredibly easy to embed Vue components by allowing you to register each and every component globally in resources/js/app.js
.
The relevant piece of code that does that is the following:
const files = require.context('./components', true, /\.vue$/i);
files.keys().map(key => Vue.component(key.split('/').pop().split('.')[0], files(key).default));
Basically this loops through all the Vue single file components (.vue
extension), including sub-folders, and registers them. That way you can just refer to the component anywhere in your Blade templates and won't need to import them explicitly. Furthermore, inside each component you can include any other component without importing it or registering it in the parent component. So it makes it really convenient to work with Vue.
One obvious downside is that it makes it much harder to split your code. Personally I've done it on another project (separate bundles for admin and regular users) but it's not provided out of the box. What this means is that you're essentially loading your entire bundle on every page of your Laravel app. At the end of the day it's not a huge deal if you are mindful of your bundle size.
The other downside that I just encountered is that apparently you can't have two components with the same name, even if they're in different folders. The code snippet above, which performs the registration, basically flattens out the entire folder hierarchy inside the components
folder.
In my specific case, I initially had the following component, which provided the important piece of functionality that I mentioned: components/VMenu.vue
.
During the changes (mostly front-end) that I made, I added another component with the same name, located at components/icons/svg/VMenu.vue
. This component was a new SVG icon that I added to the project, following the pattern I discussed a while back.
The name of this new component is important because I follow a very strict naming convention for SVG icon components: "V", followed by the file name of the original icon in PascalCase. I am partial to Feather Icons these days. So for example, their arrow-right
icon becomes VArrowRight.vue
when I import it into my Laravel/Vue project.
Just like that, my very important functionality no longer worked. And I had no idea why, since Yarn didn't throw any errors upon compiling, nor were there errors in the console.
After trying out different things, I thought I would build the original VMenu.vue
component (and its parent) from scratch, bit by bit. Eventually I discovered that if I renamed it to something else, it worked. And then the 💡 went off. My feelings were very conflicted: on the one hand I felt like a dumbass, on the other I felt victorious that I had restored that important functionality (which, incidentally, I had spent many hours perfecting).
There you go: if you're using the global import technique, make sure you don't give two Vue components in your Laravel project the same name. But then again, you can always import each component individually.
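If you want to catch this class of bug early, one option (my own sketch, not something Laravel ships) is to extend the registration loop so it warns when two files resolve to the same component name:

```javascript
// Sketch: the same flattening logic as the app.js snippet, plus a collision warning.
// "files" mimics the function-with-keys() object that require.context returns.
function registerAll(files, register) {
  const seen = {};
  files.keys().forEach(key => {
    const name = key.split('/').pop().split('.')[0]; // folder hierarchy is flattened here
    if (seen[name]) {
      console.warn(`Duplicate component "${name}": ${seen[name]} is overwritten by ${key}`);
    }
    seen[name] = key;
    register(name, files(key).default);
  });
}

// In resources/js/app.js you would wire it up like:
// registerAll(require.context('./components', true, /\.vue$/i), Vue.component);
```

With this in place, the silent VMenu.vue collision would at least show up in the build output.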
Basically these fall somewhere between "man, if I just had an extra 8 hours a day, I'd totally give this a whirl" and "I wouldn't be opposed to working with this technology", and even "there's a good chance I might switch to / adopt this at some point in the future".
So let's get started.
Svelte is a promising new JavaScript framework that has come out of nowhere and might well be a game-changer.
Because I recognize some of the speed and bundle size advantages of Svelte. Going through their examples and interactive tutorial, it seems easy enough to learn, on par with Vue.JS. At version 3.x at the time of this writing, it is also mature enough that it can be strongly considered for production.
The Rethinking Reactivity video by Rich Harris, Svelte's creator, was kind of an eye-opener and really brought it front-and-center for me.
While I'm not about to abandon VueJS for Svelte, it's worth keeping close watch on it, and I'm very interested in the direction it's going to evolve. I am itching for a spare moment between my day job and all my side projects to give it a spin.
Since I haven't dug into it a whole lot, I don't want to speak out of turn but my understanding is that the tooling around Svelte is not yet mature enough or at the level of Vue or React's ecosystems. For example, a lot of folks seem to have had issues trying to use SASS with Svelte.
Svelte is, in 2019, one of my top contenders for new technologies that I find very promising. Time will tell if that is the case.
GraphQL is a language for querying your API.
While it has been around for a few years, GraphQL was treated as more experimental than anything, though it has been picking up steam, and deservedly.
A while back I worked on a project that made use of a GraphQL API and it took a while to wrap my brain around it. Only much later did I realize the benefits. What I like best about it is that it greatly simplifies your API endpoints and allows complex querying directly from the front-end.
If I needed to build a Single Page App (SPA) driven by an API that I control, I would almost certainly create a GraphQL API server.
NativeScript is, without a doubt in my mind, one of the hottest technologies that allow a JavaScript dev to build cross-platform mobile apps.
Web development is hard enough on the desktop. But when we entertain the possibility of building native mobile apps for our product or service, it feels like a lost cause. NativeScript seeks to mitigate a lot of that pain, by making it easy for non-mobile developers to build mobile apps, with the technology they already know.
As a Vue developer, I'm happy that NativeScript officially supports Vue. I skimmed the documentation and it seems pretty thorough. More importantly, there's a sandbox environment that you can use to quickly build a demo app that actually runs on your phone!
Now, building a cross-platform mobile app with NativeScript + Vue is definitely more involved than simply building a Vue web app, but this is a light at the end of the tunnel for those like me who don't have the time/resources/energy to learn how to code for Android and iOS. If I needed to build a mobile app, I would very likely use NativeScript as my first choice.
Cypress.io is the new hotness in UI/browser testing.
I'm a big proponent of testing in general so anything that makes this easier is an instant win. There's been a lot of hype around Cypress lately and from what I've been told, it's the new gold standard in front-end testing.
Until recently, Selenium has been the go-to end-to-end testing framework, but there are a lot of problems with it and it just doesn't make developers' lives easier. Apart from being slow and cumbersome, it also requires learning a new language/paradigm (Java) - and in many cases dedicated personnel.
Cypress eliminates all these drawbacks and more. Very importantly, it uses JavaScript, which can eliminate the need for a dedicated tester with Java knowledge and allow any developer to write their own tests. More benefits are explained here.
Honestly, the only thing that prevents me from using it right away is the fact that the projects I'm working on are in continuous flux, meaning the UI and behavior change often, meaning I just don't have the bandwidth to create end-to-end tests in addition to the back-end tests I usually write.
But then again, it all depends on what kind of projects you're working on. For quick prototyping and MVP-style products, writing too many tests can be detrimental. Once the product becomes more established, that could be a good time to employ a tool such as Cypress.
Ruby on Rails is a modern application framework for web apps. I've never used Ruby and the syntax looks odd compared to PHP. I've included it not because it's a new or emerging technology, but because recently I've become more interested in it.
My love for Laravel is limitless for the time being, yet Rails feels very much like a spiritual predecessor. I've listened to DHH, the creator of Rails, many times on various subjects and everything he's said so far resonates with me, including his philosophy behind Rails.
Taylor Otwell, Laravel's creator, has imbued his framework with a lot of the same underlying principles as Rails: ease of use, getting things done, developer happiness, beautiful API, great documentation, and on and on. It leads me to believe that programming in Rails would be just as satisfying as Laravel.
Finally, as a petty reason, DHH is a co-founder at my favorite tech company Basecamp, whereby the product is naturally built in Rails. In a parallel universe I can see myself working there.
SwiftUI is the latest application framework revealed by Apple this year.
While I'm not an iPhone user (I do use a Mac for coding and I wouldn't have anything else), SwiftUI brings a lot of good vibes and seems to have developers very excited. It claims to offer a unified way to build apps for the entire Apple ecosystem, which is always a good thing in my book. Yet more proof that modern tools continue to get better and make developers' lives easier as we go along.
If I had the need or urge to code anything exclusive for Apple's ecosystem, SwiftUI would be my first choice.
Gatsby is a static site generator based on React.
I'm a big fan of static site generators. Omigo.sh itself is a statically-hosted site. Gatsby is fairly new but has started to become more popular and, if I'm not mistaken, is the first choice for anyone wishing to build a static site with React. I've never touched React but it is the most popular front-end framework at the moment, and I am at least marginally interested in learning it.
Gatsby uses some interesting technologies, such as intelligent prefetching of resources and GraphQL. It's unlikely I'll use it anytime soon but I'm definitely keeping an eye on it.
.NET Core is an open-source (gasp!) framework from Microsoft.
I never saw myself as being interested in Microsoft's stack, simply because I love open-source so much. Over the last few years, however, Microsoft has surprised everyone by embracing open-source with (dare I say?) a vengeance.
Between VSCode, Github, and now .NET Core (as well as others that I don't currently recall), Microsoft is going full steam ahead on open-source technologies. That's very commendable and I hope they keep it up.
I recently heard about .NET Core from an old friend who mentioned he's using it at work. When he told me it was open-source, it immediately piqued my interest.
.NET Core is pretty far down my list of technologies but, based on everything I know, I wouldn't be opposed to learning it if the opportunity presented itself.
To reiterate, Omigo.sh is built on TightenCo's Jigsaw static site generator, which is itself built on top of Laravel. Starting today, you can freely peruse the code and customizations I made to the site and blog. Omigo.sh on Github.
This was a very obvious move, and I only regret not opening it up from the beginning. One day I had a "doh!" moment and realized that there's no benefit in keeping it in a private repo, but everything to gain by making the source code freely available.
// input
$item = [
'name' => 'Item name',
];
$year = $item['year']; // Undefined index: year
If I try to reference a non-existent key and don't handle that properly, I'll get this nice little error, and my code will break:
PHP error: Undefined index: year on line x
So let's assume that I want to handle this by assigning null
to an undefined value or index. This can be done a few different ways.
Option 1
Long form. I've seen this a lot in older code bases (especially pre-7.0) and I just don't like it. It's too lengthy and awkward.
$year = isset($item['year']) ? $item['year'] : null; // null
Option 2
Null coalescing operator (??). Way cleaner and much more elegant. PHP 7.0+.
$year = $item['year'] ?? null; // null
Option 3
A more graceful approach with error reporting suppression. I haven't used PHP's @
(error control) operator in a long time and had almost forgotten about it. Frustrated with the verbosity of error handling in an older PHP project that did not have access to null coalescing, I discovered this much shorter syntax and it does exactly what I need.
If you want to assign anything but null
, this method won't work, of course.
$year = @$item['year']; // null
NB Test this well to ensure it works in your local and/or production environments. I haven't found any issues in any of mine, but caveat emptor. Also, if you have custom error handling/reporting in place, this might not work. Always test your code when trying this method!
Bonus
To expand on this, let's say I want to assign a default year, if the year in the input is not defined. Using the long form approach, I could do the following - and it's messy and hard to follow:
$current_year = 2019;
$year = isset($item['year']) ? $item['year'] : (isset($current_year) ? $current_year : null); // 2019
Using the error control operator it can be simplified to:
$year = isset($item['year']) ? $item['year'] : @$current_year; // 2019
Or even further for PHP 7.0+:
$year = $item['year'] ?? @$current_year; // 2019
firstOr().
It first came to my attention through this tweet. It looked intriguing so I fell down the rabbit hole and did a little bit of my favorite new sport: source diving through Laravel's codebase.
What does firstOr() do?
This method seems to have been introduced in Laravel 5.4 and it can be found in the Eloquent/Builder class vendor/laravel/framework/src/Illuminate/Database/Eloquent/Builder.php
. It should look familiar because it is similar to sister methods such as first()
and firstOrFail()
. Unlike firstOrFail()
, it executes a callback when it doesn't find a result, which can be a very powerful feature depending on your use case.
The first parameter is an array of columns that you wish to extract from your query (if it finds results). The second parameter is the callback I mentioned.
Let's see this in action. Here are a few examples I put together illustrating how you might use this.
$r = App\User::where('id', 1)->firstOr(['name', 'email'], function () {
return response()->json([
'message' => 'This user does not exist.',
], 404);
});
Illuminate\Http\JsonResponse {#3474
+headers: Symfony\Component\HttpFoundation\ResponseHeaderBag {#3480},
+original: [
"message" => "This user does not exist.",
],
+exception: null,
}
$r = App\User::where('id', 1)->firstOr(['name', 'email'], function () {
throw new \Exception('This user does not exist.');
});
Exception {#3388
#message: "This user does not exist",
#file: "...\vendor\psy\psysh\src\ExecutionLoopClosure.php(55) : eval()'d code",
#line: 2,
}
$r = App\User::where('id', 1)->firstOr(['name', 'email'], function () {
logger('This user does not exist.');
});
Then in laravel.log
you will see:
[2019-06-02 21:14:03] local.DEBUG: This user does not exist.
A successful query returns an Eloquent model instance.
App\User {#3441
name: "Mr. Leon Muller",
email: "maiya57@example.net",
}
If you want the array representation, you can chain toArray()
, of course:
$r = App\User::where('id', 1)->firstOr(['name', 'email'], function () {
throw new \Exception('This user does not exist.');
})->toArray();
But there is an alternative:
$r = App\User::where('id', 1)->firstOr(function() {
throw new \Exception('This user does not exist.');
})->only('name', 'email');
// result
[
"name" => "Mr. Leon Muller",
"email" => "maiya57@example.net",
]
$this->withoutExceptionHandling();
This is particularly useful in situations which trigger a 500 Server Error
that doesn't offer much context. Of course, you can dig through the logs, but it's a lot quicker to be able to see the error output when you're running the test.
I've been using this technique for a while now but it hadn't occurred to me that it matters where you place the call within your test.
Here's the scenario that brought me to this realization. Imagine that I am using TDD to test logic for a simple blog, more specifically that I can create a new post. And this is how I would write a basic test in Laravel for this functionality.
/**
* @test
*/
public function as_an_authenticated_user_i_can_create_a_post()
{
$this
->post('/posts/store')
->assertRedirect('/login');
$body = [
'title' => 'A new post',
'contents' => 'Post content',
];
$this
->actingAs($this->bob)
->post('/posts/store', $body)
->assertRedirect('/posts');
$this->assertDatabaseHas('posts', $body);
}
My test has 3 assertions. First, I'm making sure that an unauthenticated user cannot create a post - they are redirected to the login page. Second, as an authenticated user, I want to be redirected to the list of posts after successfully creating a new post. Third, I also want to make sure that the post data was saved to the database.
Running this test produces the following error:
Response status code [500] is not a redirect status code.
Failed asserting that false is true.
...
Well, that's not very helpful. $this->withoutExceptionHandling();
to the rescue! I plug it in quickly as the first line in my test and...
public function i_can_create_a_post()
{
$this->withoutExceptionHandling();
...
... the output is not what I would expect.
Output
There was 1 error:
1) Tests\Feature\ExampleTest::i_can_create_a_post
Illuminate\Auth\AuthenticationException: Unauthenticated.
...
Um... what gives? I already know that my first assertion passed. I know that because I wrote that statement first and the test was green. After writing the next 2 assertions, it went red. It looks like it catches the first action/assertion:
$this
->post('/posts/store')
->assertRedirect('/login');
Instead of the second action/assertion (which is what triggers the 500 Server Error
):
$this
->actingAs($this->bob)
->post('/posts/store', $body)
->assertRedirect('/posts');
So it turns out that withoutExceptionHandling
needs to be right above the piece of code that fails, and not at the beginning of the test, as I had thought until now. Correcting my mistake:
...
$this->withoutExceptionHandling();
$this
->actingAs($this->bob)
->post('/posts/store', $body)
->assertRedirect('/posts');
...
Output
Ah, this is the real issue.
There was 1 error:
1) Tests\Feature\ExampleTest::i_can_create_a_post
Illuminate\Database\QueryException: SQLSTATE[42S22]: Column not found: 1054 Unknown column 'content' in 'field list' (SQL: insert into `posts` (`user_id`, `title`, `content`, `updated_at`, `created_at`) values (3, A new post, Post content, 2019-06-02 15:40:01, 2019-06-02 15:40:01))
...
Caused by
PDOException: SQLSTATE[42S22]: Column not found: 1054 Unknown column 'content' in 'field list'
...
That's more like it. This was the error I was looking for, and it makes all the sense in the world. Notice the little typo contents
vs content
.
And that's all there is to it. By discovering this, `withoutExceptionHandling`'s utility has increased in my eyes.
On a final note, `withoutExceptionHandling` takes an additional argument, which is an array of exceptions that you want it to ignore. If you want to find out more about the inner workings of this function, you can find it at `vendor/laravel/framework/src/Illuminate/Foundation/Testing/Concerns/InteractsWithExceptionHandling.php`.
The expected behavior when downloading, say, a `.jpg` file is for the file to be saved with the designated name and the `.jpg` extension. Well, turns out that Safari appends a `.html` at the end, thus saving the file as `filename.jpg.html` instead of `filename.jpg`.
After a little googling I came across this discussion that helped me isolate and fix the problem. The interesting bit here is the `curl -I` command.
Running this command against my download link, I got the following (in this particular case I'm trying to download an icon file with an `.ico` extension):
curl -I http://sikrt.test/d/oc6anhsjt
HTTP/1.1 200 OK
Server: nginx/1.15.0
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
X-Powered-By: PHP/7.2.12
0: Content-Type: application/octet-stream
1: Content-Disposition: attachment; filename="oc6anhsjt.ico"
Cache-Control: no-cache, private
Date: Mon, 03 Jun 2019 02:23:03 GMT
Content-Disposition: attachment; filename=oc6anhsjt.ico
Set-Cookie: XSRF-TOKEN=eyJpdiI6IlVoXC8yOTVtQWRVT3ZuUmVqWitOcWhBPT0iLCJ2YWx1ZSI6InA3aHpSWE5pR0o3cUV5cEdjQXJySE4yVFJsdFVqQk5UOUtyXC9UQTZ2TXRLYlBUYWk1aFJ3UU5hVWk4TE5ibTdYIiwibWFjIjoiODY3ZmQ4ZTU1YzNjODRmODU2ZTgyNDJhN2Q2YjczNzRjY2MyZGIwZDVhMjFhZmMxNDU1NDJlNjZhOGM0NzYyZSJ9; expires=Tue, 04-Jun-2019 02:23:03 GMT; Max-Age=86400; path=/
Set-Cookie: sikrt_session=eyJpdiI6IkpRWjhlUEJvTTltWTBkUGFGa1h5bWc9PSIsInZhbHVlIjoicTdycmQ1U3czSGhoS3BFNER5SGo3bFo4OG1yMFFwRGxWZTdmSmZcL1dPNVdTUmROY1VPMUV2YXRNOW9HK1pUb2oiLCJtYWMiOiIzMjMxYjIwODE4NjVhNGQ3OTRmN2ViZTgxMDRmYTMyOGFkMzA1ZTM3YTNiNzZmMGUxYjc5MjdiYmYwZGQ0MWU1In0%3D; expires=Tue, 04-Jun-2019 02:23:03 GMT; Max-Age=86400; path=/; httponly
The trick to get Safari to download the file with the proper extension is to send the correct headers, in this case `Content-Type: application/octet-stream`.
Right off the bat you might notice this bit:
0: Content-Type: application/octet-stream
1: Content-Disposition: attachment; filename="oc6anhsjt.ico"
Something looks funky here: there shouldn't be a `0:` in front of the header. Why is that? Because I was passing the headers as an array of plain strings instead of an associative array, so the numeric array keys ended up being used as header names. Laravel's documentation doesn't explain how the headers array should be structured. Running the curl command helped me diagnose the issue.
Before
[
'Content-Type: application/octet-stream',
...
]
After
[
'Content-Type' => 'application/octet-stream',
...
]
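To make the failure mode concrete, here is an illustrative sketch (Python, not Laravel internals; the names are my own) of why a plain list of header strings produces those `0:`-prefixed lines:

```python
# Illustrative sketch: when headers are given as a plain list, the numeric
# list index plays the role of the header name, which is exactly the
# "0: Content-Type: ..." line seen in the curl output above.
wrong = ['Content-Type: application/octet-stream']    # list of strings
right = {'Content-Type': 'application/octet-stream'}  # name => value map

def render(headers):
    # A dict yields (name, value) pairs; a list yields (index, string) pairs.
    pairs = headers.items() if isinstance(headers, dict) else enumerate(headers)
    return [f"{name}: {value}" for name, value in pairs]

print(render(wrong))  # ['0: Content-Type: application/octet-stream']
print(render(right))  # ['Content-Type: application/octet-stream']
```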
Following this fix, Safari was able to download the file with the correct extension.
Running `curl -I` again produced the correct output:
curl -I http://sikrt.test/d/oc6anhsjt
HTTP/1.1 200 OK
Server: nginx/1.15.0
Content-Type: application/octet-stream
Content-Length: 610
Connection: keep-alive
X-Powered-By: PHP/7.2.12
Content-Disposition: attachment; filename=oc6anhsjt.ico
Cache-Control: no-cache, private
Date: Mon, 03 Jun 2019 02:27:05 GMT
Set-Cookie: XSRF-TOKEN=eyJpdiI6InIySU1uOEJcL0hDMFdyaUk3Q3BIKzB3PT0iLCJ2YWx1ZSI6IjVFaG9Iem1zeXY5UVdPeCtWdFkzXC95cVcwU2Njd0ZyMHFaMXd6bDQrUnJYNkJtRUV5THk4UlFPcjRXaTMzd2F0IiwibWFjIjoiZDA1MWU1YzEzYzVlMGE0OWZjMTIxNzdhOTNmMGU1YTY1MzRkMWYzMWU5M2RmYWZjMDVlZWU5YmUzYTU3ZjNhNCJ9; expires=Tue, 04-Jun-2019 02:27:05 GMT; Max-Age=86400; path=/
Set-Cookie: sikrt_session=eyJpdiI6IkxGT0lUZytKSXFuUWltOXZqSzAySHc9PSIsInZhbHVlIjoiazRDUHlUUHNEMTI3cGtkRktGVVFEMmo1QXRhc1MyVGE1OEdCNE9SZHJPWXRLbVowSXJLZDRcL3QwYTg4MVZFbFwvIiwibWFjIjoiNWU1NDllNDJiNTBiZDFiYzY0NThlNThjYTg4NmM0OGEyNzFmYmYwZjg1ODdhYzQzZjJjMzJhY2MyOWI0NThjOCJ9; expires=Tue, 04-Jun-2019 02:27:05 GMT; Max-Age=86400; path=/; httponly
The piece of code that handles the download looks something like this:
use League\Flysystem\Filesystem;
use League\Flysystem\Memory\MemoryAdapter;
// ...
$filesystem = new Filesystem(new MemoryAdapter()); // keep the file in memory
$filesystem->write('file', $filedata);
return response()->streamDownload(function () use ($filesystem) {
echo $filesystem->read('file');
}, $filename, [
'Content-Type' => 'application/octet-stream',
'Content-Length' => $filesystem->getSize('file'),
'Content-Disposition' => "attachment; filename=\"{$filename}\"",
]);
Listening to right now:
On my radar:
I wasn't really a podcast person until I started listening to Full Stack Radio but from that moment I got hooked, and every 30m - 1h of idle time thereon was reclaimed. As it happens, my daily work commute falls between 30m - 1h which gives me the opportunity to listen to 2 episodes every day while I'm driving. I only wish I had done this years before, during hundreds of hours literally wasted driving to work through our lovely Chicago traffic.
There's a lot more to say about why I now believe that listening to profession-related podcasts is a very good thing, but for now I'll just mention that to me, personally, listening to something technical such as a developer podcast must be done under certain conditions. I can code while I'm listening to music, but I can't do that while listening to a podcast. The reverse is true while I'm driving. My driving muscle memory is strong enough that I can focus on the technical aspects discussed in the podcast.
Another way of looking at this from an engineering perspective is that, while coding, my brain needs to allocate most resources to the main process, and much less to the secondary process of listening to something. Driving is one of those activities that I can do very well on a secondary process. In over 20 years of driving without incidents I have yet to be proven wrong.
Beyond immersion in a particular culture (an aspect I want to revisit in more detail in a future article), one huge benefit of hearing cool things about the technologies I work with on a daily basis is that it energizes me first thing in the morning. Imagine arriving at work all pumped up about a new technique or technology.
Bottom line, I've become a strong believer that there aren't much better ways to reclaim time wasted commuting (or engaging in a huge variety of brainless activities) than to listen to a podcast tied to your profession.
One word of caution though: take care while doing something potentially dangerous while wearing headphones. I had a close call once while riding my bicycle and listening to a podcast through headphones. In that situation my mind tends to become boxed in, and gradually loses connection to the outside. Lesson learned - I'll never do that again.
Let's dissect each of the developer podcasts I've listed, to see what makes them special. I won't go into too much detail about where to find each podcast but my platform of choice is Spotify. If it's not there, the chance of listening to it decreases for me, as I do most of my listening while driving. I'm not an iOS user either so iTunes is not a great option for me.
Hosted by Adam Wathan
Length 20m - 1h+
Spotify? Yes
What is it about? Adam's background is Laravel and Vue.js, and of course Tailwind CSS, so a fair number of episodes revolve around this ecosystem, but there are numerous discussions on a variety of industry topics, to quote, "everything from user experience and product design to unit testing and system administration". More often than not he interviews developers, designers, and people from the software development community.
If there's one podcast above all else that I would pick, this would be it. Once I discovered it, I listened to all the episodes, going back in time all the way to the beginning. I've learned something new or interesting from every single episode - 115 and counting as of this writing.
Hosted by Wes Bos & Scott Tolinski
Length 20m - 1h+
Spotify? Yes
What is it about? Wes and Scott are full stack developers deeply embedded in React-land. React is not really my cup of tea (Vue.js FTW) but this is my second most listened podcast because most episodes are not React-specific but rather touch heavily on general JavaScript issues, as well as a diverse range of topics relevant to developers. The two have great chemistry and are absolutely hilarious to listen to.
I found out about Wes Bos when I attended his talk on CSS Grid at Laracon 2018. Both he and Scott run their own businesses focused mainly on creating premium video tutorials on React and JavaScript-related subjects.
Hosted by Jake Bennett & Michael Dyrynda
Length 20m - 1h+
Spotify? Yes
What is it about? As the title suggests, this podcast is mostly about things happening in Laravel-land. Each week or so, Jake and Michael discuss new features that are added to the framework, as well as new packages and various other related topics. If you want to become a better Laravel developer I highly recommend it.
I started listening to this podcast recently and, once again, I fired up my time machine and went back all the way to the first episode. I didn't adopt Laravel very early on so I missed a lot of the goodness that happened in older versions. Because Laravel is my universe for the foreseeable future, I want to absorb as much of it as I can, so it makes sense to me to witness "history in the making", if you'll forgive the pun.
Hosted by Caleb Porzio & Daniel Coulbourne
Length 20m - 30m
Spotify? Yes
What is it about? Two developers from Tighten discuss programming topics for a quick and succinct 20-30 minutes. Since both are Laravel devs, quite a few discussions touch Laravel to some extent but I really enjoy the light, friendly banter between the two. It's like having a conversation over a beer with your best bud about programming and work-related stuff. Very insightful, relaxing, and I appreciate the short length of each episode, which makes it easy to pick one up at odd moments when I don't have a full hour to dedicate to another podcast.
Hosted by the folks at Basecamp
Length 20m - 30m
Spotify? Yes
What is it about? While this podcast is not specifically about programming, it is produced by one of my favorite companies in the world, and the subjects are nonetheless incredibly fascinating. Many of the episodes are about better ways to run a business, as well as various pitfalls that the company has encountered. Most episodes feature interviews with employees of the company in diverse positions (which makes it fascinating to take a peek inside Basecamp) but also with industry peers who hold similar views or face similar challenges. Overall it's a great insight into how a well-run company operates. The episodes are short and to the point, which makes them easily digestible at a moment's notice.
Hosted by Taylor Otwell
Length 10m - 15m
Spotify? No
What is it about? Very quick snippets by Taylor Otwell, the creator of Laravel. This one is a little weird because it's not a podcast per se, but rather more of an audio blog where he gives updates on how his work on Laravel and related products is progressing. A must-listen if you are a Laravel dev, as it gives insight into Taylor's mindset.
Hosted by Jeffrey Way aka Laracasts
Length 10m - 20m
Spotify? No
What is it about? Short random snippets, some touching on programming, others not so much, by Jeffrey Way. Jeffrey is a huge contributor to the Laravel community and the creator and owner of Laracasts and I enjoy these tidbits a lot. Too bad they're not up on Spotify.
Hosted by Caleb Porzio & Daniel Coulbourne
Length 20m - 30m
Spotify? No
What is it about? As far as I can tell, Twenty Percent Time morphed into No Plans to Merge. Both are hosted, of course, by Daniel and Caleb. I plan to go back and listen to this podcast as well, because I like the duo so much.
While you can just follow the instructions there, I'm going to document the steps anyway, for my own sake.
Download the Windows x86-64 executable installer and run it.
python -m pip install --upgrade pip
pip install --upgrade pip setuptools
pip install --upgrade httpie
That's it.
First of all, the command itself is `http`, NOT ~~httpie~~. I was wondering why it wasn't working after I installed it.
I was interested in testing a GET API endpoint in a local environment that had a bunch of query parameters appended at the end of the URL.
Turns out that HTTPie has a special shorthand syntax for query parameters, headers, and body fields, and it can cast values to specific types when the value is not a string. Here are a few forms; the documentation has more.
Query string parameters: `name==john` or `name=="john wick"`
Numbers/Booleans (raw JSON fields): `year:=2015` or `active:=true`
Request headers: `key:value` or `key:"value with spaces"`
Here I'm making a GET request with some query parameters as well as a couple of headers.
GET example.com/api/quote?year=2015&name=john&birthday=06/21/2001&zipcode=60201
Headers:
Authorization:"Bearer tPOm3BXiYSv7fwnIN5dUCzpCy6sGH2Mdclj2BwBZvFw..."
accept:application/json
Command terminal (Windows/Mac/etc):
http GET example.com/api/quote year:=2015 name==john birthday==06/21/2001 zipcode==60201 Authorization:"Bearer tPOm3BXiYSv7fwnIN5dUCzpCy6sGH2Mdclj2BwBZvFwbDhLrAh0NmvtnyF4fdR3CbqAAdPQMPbSFYKXk" accept:"application/json"
The response:
HTTP/1.1 200 OK
Cache-Control: no-cache
Connection: keep-alive
Content-Type: application/json
Date: Sat, 19 May 2019 19:11:45 GMT
Server: nginx/1.15.0
Set-Cookie: XSRF-TOKEN=eyJpdiI6ImlCK3M4bXI3NXdwUmw3ekpTcEs...; expires=Sat, 19-May-2019 21:11:45 GMT; Max-Age=7200; path=/
Set-Cookie: laravel_session=eyJpdiI6ImZYMm10djgwS29i...%3D; expires=Fri, 25-May-2019 07:11:45 GMT; Max-Age=216000; path=/; HttpOnly
Transfer-Encoding: chunked
{
    "data": {
        "somenumber": 4242.42
    },
    "message": "Here is your data",
    "success": true
}
If you get the following error when running it in Git Bash:
http: error: Request body (from stdin or a file) and request data (key=value) cannot be mixed. Pass --ignore-stdin to let key/value take priority...
Then just run the same command but append the `--ignore-stdin` flag at the end.
So when I heard that Marcel Pociot (a prominent open-source contributor and Laravel expert) was working on a course on PHP package development I knew it was going to be worth every penny. And no, in case you're wondering, he's not paying me to say this, I'm just very happy with his course.
Owning the tools is cool but putting them to work is another story. Luckily, one thing I don't lack is ideas. Initially I had a more ambitious package in mind (which will probably be the next one I build) but the need arose for a reusable and configurable piece of Laravel code, and after a Sunday of on-and-off coding I produced what I'm about to describe.
If you want to hear my rambling history of how this package came about, hang around. Otherwise, here's the short version of what it does.
laravel-silent-spam-filter does a very simple thing. It analyzes a string for certain keywords or phrases and returns `true` if it finds those words, or `false` otherwise. Sounds useless, but there's more depth to it.
First, the keywords are configurable in a config file. When you publish the package config, a file will be created at `config/silentspam.php` that comes pre-loaded with just 2 entries:
return [
'blacklist' => [
'(beautiful|best|sexy) girls',
'girls in your (city|town)',
],
];
You can replace these and/or add your own. In this example, if a string contains any combination of "beautiful girls" or "best girls" or "sexy girls" or "girls in your city" or "girls in your town", it will be marked as spam.
Thus, the second feature is that you can use regex patterns to filter spam.
Third, you can use Laravel's facade to run the spam check, as shown below:
SilentSpam::isSpam('Find sexy girls in your city'); // true
Finally, while the keywords in the config are global to the entire application, you can add additional keywords at runtime, before calling `isSpam`:
SilentSpam::blacklist([
'buy pills',
]);
SilentSpam::isSpam('Go to this site to buy pills'); // true
And if you feel lazy, there's also a `notSpam` method, which is exactly the opposite of `isSpam`.
SilentSpam::notSpam('This is a normal message'); // true
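A rough, framework-agnostic sketch of what such a check boils down to (Python for illustration; `BLACKLIST`, `is_spam`, and the case-insensitive flag are my assumptions, not the package's actual internals):

```python
import re

# Hypothetical blacklist mirroring the package's published config entries.
BLACKLIST = [
    r"(beautiful|best|sexy) girls",
    r"girls in your (city|town)",
]

def is_spam(text, extra_patterns=()):
    """Return True if any blacklist pattern matches the text."""
    patterns = list(BLACKLIST) + list(extra_patterns)
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)

def not_spam(text, extra_patterns=()):
    return not is_spam(text, extra_patterns)

print(is_spam("Find sexy girls in your city"))                 # True
print(is_spam("Go to this site to buy pills", ["buy pills"]))  # True
print(not_spam("This is a normal message"))                    # True
```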
One of my projects named Sikrt has apparently caught the attention of some spammers who've latched on to the public-facing contact form, despite being protected by Google's Recaptcha. I wasn't getting a lot of spam, but it was constantly trickling in, at a rate of 1-2 a day. Perhaps Recaptcha was actually doing its job, or else I would have been swamped? Who knows. The fact is that it still let some of it through.
Now, these spammers (bots really) operate under the premise that any public form might be attached to a commenting system or something similar (like on a blog for example). They'll fill in and submit the form with their garbage which almost always contains links to whatever they're trying to promote. Very often these messages will get passed through, ending up as "legit" content on that page. From there, Google will pick up the links, and a few visitors will click them. Sometimes the form contents are emailed to the site owner's address, from where additional mischief can occur.
Unfortunately for them, all my contact form does is save an entry in my database. I didn't build a more complex solution because I don't need it at the moment. I can check messages directly in the DB. So the spam doesn't end up anywhere productive. There's still the matter of clearing these out occasionally (as I said, the volume received is low).
I thought I would automate this clearing-out process without complex checks or third-party APIs.
But before that, I wanted to try yet another protection method: the honeypot. And what package would be better suited for this than one from Spatie (a heavy-duty Laravel contributor). I added laravel-honeypot but unfortunately, just like Google Recaptcha, it still lets some spam through. I was kinda expecting that; by now spammers seem to have grown wise to this method and built smarter bots that can bypass it.
By analyzing the spam messages I had so far, I noticed some very obvious patterns and words that could be easily filtered by. It looks like all of it came from the same spamming "authority" so it was trivial to create a few very simple rules to handle those kinds of messages.
The way I decided to approach it was to simply not save anything in the database if it matched those patterns. The spammer would receive feedback that the form was submitted successfully, but the data would end up in a black hole.
Initially I built the pattern matching as a service in Sikrt itself, using TDD of course, but soon after launching it in production I decided it would make a great (if very basic) first package.
I wanted 2019 to be the year I released at least one open-source package and even though I had something more complex in mind, this was a great learning experience.
As I mentioned at the beginning, PHP package development is amazing in guiding you step by step through the PHP package-building process, including Laravel-specifics. Hand-crafting still remains a little tedious, especially for someone who hasn't done this a hundred times before, but luckily the author has also built Laravel Package Boilerplate which makes it trivial to scaffold the whole directory structure, along with all the bells and whistles he describes in the course.
Of course, as oft happens in these pioneering moments, after moving my original logic to the package, I spent 3 hours trying to figure out why my tests were failing with a cryptic message, only to discover that I had the wrong namespace somewhere in my new code. Lesson learned.
My original spam filtering service worked something like this:
use App\Services\SpamService;
// Silently reject spam messages
if ((new SpamService(config('spam.blacklist')))->notSpam(request('message'))) {
// save the contact form data
}
My goal was to simplify the API by implicitly loading the blacklist from the config file, and also to be able to use Laravel's facade accessor.
Converting the code was simply a matter of copy-pasting it from the original service to the new package structure, replacing the calls with the facade accessor and making sure the tests still worked. Which, until I discovered the elusive mangled namespace, they didn't.
Overall, the experience was smoother than expected. Back in my Sikrt project codebase, it was even simpler to `composer require breadthe/laravel-silent-spam-filter` and swap everything out. And the app continued to work perfectly.
## In conclusion
The package may be very basic but it's my package and I'm proud of it. Building it allowed me not just to dip my toes in this exciting new world, but also opened my eyes to what the package-building process entails.
You may ask why this is strictly a Laravel package. For one thing, I'm deeply embedded in the Laravel ecosystem and wouldn't use anything else at the moment. For another, as simple as it is, I don't think there would be much value in creating a general PHP package. After all, the core functionality is just a regex check. But if you are still upset about the Laravel exclusivity, you are always invited to contribute.
It's a bit of a drug. I'm already brainstorming what the next one should be.
Many of these are curated from the amazing Servers for Hackers course.
ssh-keygen -t rsa -C "user@example.com" -b 4096
# optionally add a passphrase
Add a key to the keychain.
ssh-add -K ~/.ssh/id_rsa
# enter passphrase
List all the keys added to your agent.
ssh-add -l
Copy the public key to the clipboard. On macOS:
pbcopy < ~/.ssh/id_rsa.pub
# or
cat ~/.ssh/id_rsa.pub | pbcopy
On Windows (Git Bash):
cat ~/.ssh/id_rsa.pub > /dev/clipboard
Shows info about the Linux distribution.
lsb_release -a
Sample Output
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
Shows system info.
uname -a
Sample Output
Linux thebolapp-staging 4.15.0-1035-aws #37-Ubuntu SMP Mon Mar 18 16:15:14 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
uname -r
Sample Output
4.15.0-1035-aws
uname -i
Sample Output
x86_64
Shows file system disk space (`-h` for human-readable file sizes).
df -h
Sample Output
Filesystem Size Used Avail Use% Mounted on
udev 985M 0 985M 0% /dev
tmpfs 200M 816K 199M 1% /run
/dev/xvda1 20G 6.9G 13G 36% /
tmpfs 996M 0 996M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 996M 0 996M 0% /sys/fs/cgroup
/dev/loop0 18M 18M 0 100% /snap/amazon-ssm-agent/1068
/dev/loop2 90M 90M 0 100% /snap/core/6673
/dev/loop3 18M 18M 0 100% /snap/amazon-ssm-agent/930
/dev/loop5 92M 92M 0 100% /snap/core/6531
/dev/loop6 90M 90M 0 100% /snap/core/6818
/dev/loop1 18M 18M 0 100% /snap/amazon-ssm-agent/1335
tmpfs 200M 0 200M 0% /run/user/1001
Disk usage. Useful to check how much space a directory, its subdirectories, and files, occupy. These are just some of the most useful flags and options that I use.
- `storage/` check inside the specified directory (by default it checks the current directory)
- `-c` shows a summary of the total
- `-a` shows files in addition to directories
- `-h` human-readable format
- `-d 1` looks 1 directory deep
- `-t 50M` shows only files/directories over the specified size
- `| sort -hr` sort by size (`-h` sorts by human-readable sizes, `-r` sorts in descending order of size)

Examples
Show the size of all 1st level subdirectories.
du -h -d 1
181M ./vendor
972K ./resources
112K ./config
113M ./storage
116K ./tests
728K ./app
148K ./database
36K ./routes
26M ./.git
40K ./bootstrap
9.0M ./public
331M .
Show the size of all 1st level subdirectories and files.
du -ah -d 1
181M ./vendor
368K ./composer.lock
4.0K ./.env
972K ./resources
4.0K ./.gitignore
112K ./config
113M ./storage
324K ./yarn.lock
4.0K ./server.php
20K ./README.md
4.0K ./composer.json
4.0K ./artisan
116K ./tests
728K ./app
4.0K ./.gitattributes
148K ./database
4.0K ./webpack.mix.js
4.0K ./phpunit.xml
4.0K ./.env.example
36K ./routes
32K ./tailwind.js
26M ./.git
4.0K ./.editorconfig
40K ./bootstrap
9.0M ./public
4.0K ./phpunit-printer.yml
4.0K ./package.json
331M .
Show the size of all 1st level subdirectories inside the `storage/` folder.
du storage/ -cah -d 1
4.0K storage/oauth-public.key
3.2M storage/framework
110M storage/app
260K storage/logs
4.0K storage/oauth-private.key
113M storage/
113M total
Show the size of all 1st level subdirectories, with a summary of the total, sorted by size.
du -ch -d 1 | sort -h
36K ./routes
40K ./bootstrap
112K ./config
116K ./tests
148K ./database
728K ./app
972K ./resources
9.0M ./public
26M ./.git
113M ./storage
181M ./vendor
331M .
331M total
Show the size of all 1st level subdirectories that are larger than 20M, with a summary of the total, sorted by descending size.
du -ch -d 1 -t 20M | sort -hr
331M total
331M .
181M ./vendor
113M ./storage
26M ./.git
Shows memory + swap usage.
free -h
Sample Output
total used free shared buff/cache available
Mem: 1.9G 704M 361M 17M 925M 1.1G
Swap: 1.0G 39M 984M
Shows currently running processes.
ps aux
Sample Output
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.4 225480 9072 ? Ss Apr09 1:35 /sbin/init
root 2 0.0 0.0 0 0 ? S Apr09 0:00 [kthreadd]
root 4 0.0 0.0 0 0 ? I< Apr09 0:00 [kworker/0:0H]
root 6 0.0 0.0 0 0 ? I< Apr09 0:00 [mm_percpu_wq]
root 7 0.0 0.0 0 0 ? S Apr09 3:18 [ksoftirqd/0]
root 8 0.0 0.0 0 0 ? I Apr09 9:29 [rcu_sched]
root 9 0.0 0.0 0 0 ? I Apr09 0:00 [rcu_bh]
root 10 0.0 0.0 0 0 ? S Apr09 0:00 [migration/0]
...
To sort by descending memory usage: `ps aux --sort -rss`.
As above, but get the first n lines: `ps aux --sort -rss | head -n15`.
Change user, group, other permissions.
chmod u=rwx,g=rx,o= .ssh
# similar in effect to (note: `=` sets permissions exactly, while `+`/`-` add/remove relative to the current ones)
chmod u+rwx,g+rx,o-rwx .ssh
More advanced example:
chmod u-rw+x,g-rw+x,o-r+wx XXX
# output
---x--x-wx 1 forge forge 0 May 17 19:14 XXX*
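Symbolic modes correspond to octal permission bits (`u=rwx,g=rx` with no `other` bits is `750`); a quick illustrative check in Python:

```python
import os
import stat
import tempfile

# u=rwx -> 7, g=rx -> 5, o= (no permissions) -> 0
path = tempfile.mkdtemp()
os.chmod(path, 0o750)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o750
```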
My personal flavor of this typically responds with something similar to this for a successful request:
// Successful request
return response([
'success' => true,
'data' => $data,
'message' => $message,
], 200);
Or for a failed request:
// Failed request
return response([
'success' => false,
'message' => $message,
], 422);
Note: The `422 Unprocessable Entity` status seems to be quite popular in the Laravel ecosystem, so that's what I use for generic error codes.
Repeating the above snippets over and over for a myriad endpoints can get tedious, and is a good example of code that can be extracted into some sort of reusable entity.
Traits hold a special place in my heart. I like how they can be used to handle multiple inheritance but also to share some similar piece of functionality across different classes.
I keep my traits in `app/Traits`, which is standard practice in a Laravel project. In this particular case I named my trait `RespondsWithHttpStatus` (yeah, I know, it's always hard to name things). And here's how such a trait might be constructed:
trait RespondsWithHttpStatus
{
protected function success($message, $data = [], $status = 200)
{
return response([
'success' => true,
'data' => $data,
'message' => $message,
], $status);
}
protected function failure($message, $status = 422)
{
return response([
'success' => false,
'message' => $message,
], $status);
}
}
You can import this trait into any class or method where you need to return an HTTP response.
use App\Traits\RespondsWithHttpStatus;
class MyClass
{
use RespondsWithHttpStatus;
...
Respond with success
return $this->success(
'Here is your data',
[
'field1' => 'Field 1 data',
'field2' => 'Field 2 data',
]
);
Response 200 OK
{
"success": true,
"data": {
"field1": "Field 1 data",
"field2": "Field 2 data",
},
"message": ""
}
Respond with failure
return $this->failure('Invalid token');
Response 422 Unprocessable Entity
{
"success": false,
"message": "Invalid token"
}
Right away you can tell that in most situations, where I'm returning a standard successful `200` code or a generic `422` error code, there's a lot less boilerplate to deal with, and there's always the option to return a different status code if required.
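As an aside, PHP traits roughly correspond to mixins in other languages, so the same "shared response helpers" idea travels well. An illustrative Python sketch (names mirror the trait, but this is not the article's code):

```python
# Illustrative only: the success/failure envelope as a mixin class that any
# "controller" can inherit, paralleling the PHP trait above.
class RespondsWithHttpStatus:
    def success(self, message, data=None, status=200):
        return {"success": True, "data": data or {}, "message": message}, status

    def failure(self, message, status=422):
        return {"success": False, "message": message}, status

class OrderController(RespondsWithHttpStatus):
    def show(self):
        return self.success("Here is your data", {"field1": "Field 1 data"})

body, status = OrderController().show()
print(status)           # 200
print(body["success"])  # True
```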
May the Trait be with you!
`composer` or `npm`/`yarn`. Which is pretty much a gazillion times a day. Despite my job laptop having more horsepower than my personal MacBook Pro, the former takes a lot longer to perform any `composer` or `npm` task in the terminal.
Don't get me wrong, I'm not a Mac snob. I've used Windows PCs for most of my career, only switching to Mac a few years ago for development and, oh boy, I would never go back to a PC for any kind of PHP/JS or any kind of open-source work in general. Sometimes, though, an employer can insist on a specific platform, hence the subject of this article.
As a quick recap, the Windows 10 Pro environment in question runs from an SSD on an 8th-gen i7 machine with 16GB RAM. I typically use GitBash as my terminal of choice. I've tried the built-in Ubuntu shell as well as ConEmu which does a somewhat reasonable job of allowing multiple tabs, though it's buggy and I gave up on it. Instead, I just open multiple GitBash windows which is less than ideal but is mitigated by the fact that I have 3 screens at my disposal.
The main project I'm building and maintaining is a Laravel app with lots of Vue sprinkled in, in the form of single-file components.
Whenever I work in Vue, I fire up `yarn watch` <-- Yarn master race 🙂. Well, here's what used to happen every time I saved my work. Webpack went through its build process, but got stuck for a very long time at 91% with this message:
WAIT Compiling... 10:49:04 AM
91% additional asset processing
The whole process took close to 30 seconds. You can imagine this adds up throughout the day. It's long enough to be frustrating but short enough that I can't do anything else in the meantime but twiddle my thumbs.
Having chalked it up to Windows being... Windows, I had just about given up on a good dev experience, until I decided to seek a possible solution.
Well, despair no more, fellow Windows hostages. This quick setting will speed up Webpack while it's watching for changes. Just add `devtool: 'eval'` to your Webpack config as shown:
mix.webpackConfig({
devtool: 'eval',
plugins: [],
...
})
.extract()
...
Keep in mind that the Webpack configuration above is taken from a Laravel 5.8 project, meaning it's wrapped inside Laravel Mix but in a regular Webpack project you can use the same method.
You'll need to restart `yarn watch` after adding this setting, but the watch build time drops down to 1.5-10 seconds, a 2x - 15x speed increase 🚀!
What I failed to mention (and it's an important one!) is that I don't use this technique in production, but merely in my local dev environment. In fact, Webpack mentions just that in the `devtool` documentation.
If you are curious whether there's any difference in the production bundle size without this option and after applying it: yes, there is. Using `devtool: 'eval'` produces a larger bundle. Here's a comparison (the CSS bundles are omitted because their size is not affected); the biggest difference is in the vendor bundle.
With `devtool: 'eval'`:
DONE Compiled successfully in 32631ms 10:12:56 AM
Asset Size Chunks Chunk Names
/js/app.js 656 kB 1 [emitted] [big] /js/app
/js/vendor.js 1.25 MB 3 [emitted] [big] /js/vendor
Done in 37.33s.
Without `devtool: 'eval'`:
DONE Compiled successfully in 78692ms 10:23:16 AM
Asset Size Chunks Chunk Names
/js/app.js 407 kB 1 [emitted] [big] /js/app
/js/vendor.js 345 kB 3 [emitted] [big] /js/vendor
Done in 83.60s.
Happy Webpacking!
There are situations where I need to remove one or more query parameters from my app's URL, and then return the new URL. Similarly, I want to be able to add a new parameter easily. Furthermore, in my Laravel 5.8 app, I want to invoke these helpers from anywhere in my code, including Blade templates.
This type of scenario is very common in filtering (or faceting) data by various (URL) parameters.
I made these two functions that do exactly that. Be aware that these are Laravel-specific (due to using the built-in `url()` helper) but they can easily be adapted to be framework-agnostic.
Remove Parameters
/**
* URL before:
* https://example.com/orders/123?order=ABC009&status=shipped
*
* 1. remove_query_params(['status'])
* 2. remove_query_params(['status', 'order'])
*
* URL after:
* 1. https://example.com/orders/123?order=ABC009
* 2. https://example.com/orders/123
*/
function remove_query_params(array $params = [])
{
    // Get the base URL - everything to the left of the "?"
    $url = url()->current();

    // Get the query parameters (everything that follows the "?")
    $query = request()->query();

    // Unset each of the parameters we wish to remove from the query array
    foreach ($params as $param) {
        unset($query[$param]);
    }

    // Rebuild the URL with the remaining parameters; don't append the "?"
    // if there aren't any query parameters left
    return $query ? $url . '?' . http_build_query($query) : $url;
}
Add Parameters
/**
* URL before:
* https://example.com/orders/123?order=ABC009
*
* 1. add_query_params(['status' => 'shipped'])
* 2. add_query_params(['status' => 'shipped', 'coupon' => 'CCC2019'])
*
* URL after:
* 1. https://example.com/orders/123?order=ABC009&status=shipped
* 2. https://example.com/orders/123?order=ABC009&status=shipped&coupon=CCC2019
*/
function add_query_params(array $params = [])
{
    // Merge the existing query parameters with the ones we want to add
    $query = array_merge(request()->query(), $params);

    // Rebuild the URL with the new parameters array
    return url()->current() . '?' . http_build_query($query);
}
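As an aside, a framework-agnostic adaptation is nearly free in modern JavaScript, where the URL and URLSearchParams APIs handle the parsing and rebuilding for you. A rough sketch (the function names are mine, not Laravel's):

```javascript
// Sketch: the same remove/add idea using the WHATWG URL API.
function removeQueryParams(urlString, params = []) {
  const url = new URL(urlString);
  // Delete each named parameter; an empty query drops the "?" automatically
  params.forEach((param) => url.searchParams.delete(param));
  return url.toString();
}

function addQueryParams(urlString, params = {}) {
  const url = new URL(urlString);
  // set() adds the parameter, or overwrites it if it already exists
  Object.entries(params).forEach(([key, value]) => url.searchParams.set(key, value));
  return url.toString();
}
```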
For my particular use-case, I needed to be able to use these functions either from a controller (or another class), or directly in a Blade template. Though some detest the idea of global functions, Laravel uses this pattern a lot, and it does make it much easier to build features and get things done.
Based on this StackOverflow answer, one way of creating a global helpers file is to follow these steps.
1. Create a helpers.php file (containing your functions) in the bootstrap folder.
2. Register the file under the "files" key of the autoload section in composer.json:
"autoload": {
    "classmap": [
        ...
    ],
    "psr-4": {
        "App\\": "app/"
    },
    "files": [
        "bootstrap/helpers.php"
    ]
}
3. Run composer dump-autoload.
Now your helpers should be available globally throughout your app.
One of the disadvantages of using the enum column type in Laravel migrations - it turns out - is that you can't easily perform a migration on a table that contains an enum column. It's not even a matter of changing the enum column itself; the mere presence of an enum column in the table will break the migration.
In my scenario, I wanted to create a new migration in my Laravel 5.8 project to change a column from string to text. The table in question contained an enum column.
When I ran the migration, I got the following error:
Doctrine\DBAL\DBALException : Unknown database type enum requested,
Doctrine\DBAL\Platforms\MySQL57Platform may not support it.
After some research I came upon this answer on Stack Overflow which clears things up a little.
Laravel's documentation does mention that in order to do this you first need to composer require doctrine/dbal.
Following that, my solution became this:
<?php

use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class UpdateWidgetNameWidgetsTable extends Migration
{
    public function up()
    {
        $this->registerEnumWithDoctrine();

        Schema::table('widgets', function (Blueprint $table) {
            $table->text('widget_name')->nullable()->change();
        });
    }

    public function down()
    {
        $this->registerEnumWithDoctrine();

        Schema::table('widgets', function (Blueprint $table) {
            $table->string('widget_name')->nullable()->change();
        });
    }

    private function registerEnumWithDoctrine()
    {
        DB::getDoctrineSchemaManager()
            ->getDatabasePlatform()
            ->registerDoctrineTypeMapping('enum', 'string');
    }
}
Basically you need to register and map the enum type to string with Doctrine, using this snippet:
DB::getDoctrineSchemaManager()->getDatabasePlatform()->registerDoctrineTypeMapping('enum', 'string');
Admittedly, the registerEnumWithDoctrine
function should probably reside in the global scope but I think it's fine for the purpose of this example.
I hadn't really had to simulate a specific HTTP exception before, but recently I needed to create some custom error pages (404 and so on) in a Laravel project, and I wanted to test them.
To make a short story long, there were 2 reasons for that. First, that project was upgraded to Laravel 5.8 which changes the default error screens for 403, 404, 500, and possibly others. Gone are the lovely Steve Schoger illustrations + "Go Home" button, to be replaced with a generic "404 Not Found" message and no link to go back to the home page. I don't want to rely on my users to figure out what to do in an error scenario so I built a few error pages with a proper link to the site root. Second, I wanted to extend the site's layout, to be able to have the app header and footer.
Note Laravel makes it easy to create custom error pages. Simply create a file named 404.blade.php (or equivalent) and place it in views/errors.
To test the various error statuses, I used this nifty one-liner to "simulate" the error, although that's a misnomer because it actually throws the error.
So in the controller that would return a view (any view for the purposes of testing this), instead of returning the view, use this instead (import at the top, of course):
use Symfony\Component\HttpKernel\Exception\HttpException;
...
throw new HttpException(500);
When you load that view, you'll see the HTTP exception instead, which should load up the appropriate error page. Of course, this is not at all Laravel-specific.
Recently I needed to listen for mouse events on a <router-link>.
I simply wanted to toggle a menu when hovering over a certain link. Easy, right? Well, the events simply weren't registering.
Note I am using Vue 2.6 for this example.
A bit of a head-scratcher but after some research I found out that all I needed to do was to apply the native
modifier to the event, as shown below.
Before
This code doesn't trigger the handler.
<template>
    <router-link
        to="/someroute"
        @mouseenter="handleEvent"
        @mouseleave="handleEvent"
    >
        Some Route
    </router-link>
</template>
After
This works.
<template>
    <router-link
        to="/someroute"
        @mouseenter.native="handleEvent"
        @mouseleave.native="handleEvent"
    >
        Some Route
    </router-link>
</template>
I'm working on a new, fun project called Craftnautica. In short, it's a fansite (crafting helper) for the game Subnautica.
The entire app is a Vue SPA (Single Page App) and because it doesn't require a back-end, I'm hosting it statically on Netlify.
I use Vue Router to build nested routes. A very basic nested route would be https://craftnautica.netlify.com/sn. Navigating to it via a link worked just fine after I deployed the code to Netlify; however, refreshing the page after the fact produced the error message you see above. And that went on for every nested route in the app.
To make a long story short, I discovered that an easy fix is to include a netlify.toml file in the root folder of your app, with the following:
[[redirects]]
from = "/*"
to = "/"
status = 200
Redeploy and all the nested routes and permalinks can be refreshed and accessed individually!
You can read more about Netlify's netlify.toml file and how redirects work.
If you do this often enough, you'll soon notice that create and update forms have generally the same fields and the only thing that is different is the action and the form submit endpoint. I admit that through laziness and lack of planning I've duplicated create and update form code often in the past. Here, though, is what I consider a better pattern to deal with this situation, with a minimum of code duplication.
I've been applying this technique a lot in my Laravel projects but it can be used in other frameworks or languages with a few tweaks.
In this example, I have 2 routes, one pointing to the create view, another to the update view, both containing the respective form templates.
**Create View**
@extends('layouts.app')

@section('content')
    <h1>Create Widget</h1>

    <form method="POST" action="{{ route('widget.store') }}">
        @csrf

        <input id="brand" type="text" class="{{ $errors->has('brand') ? ' is-invalid' : '' }}" name="brand" value="{{ old('brand') ?? $widget->brand }}">

        ... A LOT OF INPUTS

        <button type="submit">Create</button>
    </form>
@endsection
**Update View**
@extends('layouts.app')

@section('content')
    <h1>Edit Widget</h1>

    <form method="POST" action="{{ route('widget.update', $widget->id) }}">
        @csrf
        @method('PATCH')

        <input id="brand" type="text" class="{{ $errors->has('brand') ? ' is-invalid' : '' }}" name="brand" value="{{ old('brand') ?? $widget->brand }}">

        ... A LOT OF INPUTS

        <button type="submit">Update</button>
    </form>
@endsection
It quickly becomes obvious that the only differences between the two forms are the method (POST vs PATCH) and the route (route('widget.store') vs route('widget.update', $widget->id)). Maybe this isn't as noticeable for short one- or two-field forms, but when you have lots of fields, the pain gets real.
There must be a better way to de-duplicate the markup, right? There is.
The much more elegant solution is to extract the common form markup into a Blade partial and to include this partial, along with some metadata, in the original create/update views.
**New Create View**
@extends('layouts.app')

@section('content')
    <h1>Create Widget</h1>

    <form method="POST" action="{{ route('widget.store') }}">
        @csrf

        @include('partials._form', [
            'widget' => new \App\Widget(),
            'btnText' => 'Create',
        ])
    </form>
@endsection
**New Update View**
@extends('layouts.app')

@section('content')
    <h1>Edit Widget</h1>

    <form method="POST" action="{{ route('widget.update', $widget->id) }}">
        @csrf
        @method('PATCH')

        @include('partials._form', [
            'btnText' => 'Update',
        ])
    </form>
@endsection
**_form.blade.php Partial**
<input id="brand" type="text" class="{{ $errors->has('brand') ? ' is-invalid' : '' }}" name="brand" value="{{ old('brand') ?? $widget->brand }}">

... A LOT OF INPUTS

<button type="submit">
    {{ __($btnText) }}
</button>
With this new approach we use the exact same form input markup for both forms. The form tags along with the csrf
and method
inputs are now the skeleton containing the partial we extracted.
To allow the value of the field to be pre-populated if it exists (old('brand') ?? $widget->brand
), we either pass the $widget
model for an existing item, or we instantiate a new model when creating a new item.
For other differences between the two forms, we can pass data to the partial in an array, like we did here for the submit button label in the form of btnText.
It's worth thinking about duplication ahead of time. Often we start building a form, template, controller or other logic and then we add a very similar behaviour for a different route or model or entity, only to realize that they both operate in a very similar manner. Good planning is easier said than done but it's never too late to go back and do some refactoring. For my part, I always try to keep my code as DRY as possible.
Massive mea culpa: I didn't read the TailwindCSS documentation carefully, or I just didn't pay enough attention, but the framework does contain support for SVG fill and stroke. Basically you can apply .fill-current
and/or .stroke-current
to an svg
element and presto, your icon is colored based on the .text-<color>
class that is also applied to it.
What's even funnier is that Adam specifically mentions in the documentation the two SVG icon libraries I love the most: Zondicons and Feathericons. Since one is fill-based, while the other is stroke-based, this new and improved dynamic component wrapper should work equally well for both libraries.
As a result, the API for my dynamic icon component can be simplified a lot.
This
<v-icon icon="menu" fill="red"></v-icon>
becomes
<v-icon icon="menu" class="text-red fill-current stroke-current"></v-icon>
But I can simplify it even more by always applying fill-current
and stroke-current
inside the scoped CSS of the VIcon.vue
component:
<style scoped>
svg {
    @apply cursor-pointer;
    @apply inline-block;
    @apply stroke-current;
    @apply fill-current;
}
</style>
I no longer need the fill prop, nor the dynamicFill computed prop.
Another thing that is no longer required is the size prop. It turns out that you can simply apply Tailwind w- and h- classes to the svg element. I decided to keep the size prop anyway, in order to offer finer control over icon sizing in pixels, but to keep the UI consistent I strive to use Tailwind's classes instead.
In summary, to generate a blue 12px menu icon I would do this:
<v-icon icon="menu" class="text-blue h-3 w-3"></v-icon>
Note The above computes to 12px in my case because I have the font size on the root body
element set to 16px. Hence, h-3
and w-3
are defined as 0.75rem in Tailwind's default config, which evaluates to 0.75 * 16px = 12px.
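The arithmetic behind that note, spelled out as a tiny helper (assumes Tailwind's default scale, where h-3 and w-3 are 0.75rem):

```javascript
// rem units resolve against the root font size, so with a 16px root,
// Tailwind's h-3 / w-3 (0.75rem) works out to 12px.
const remToPx = (rem, rootFontSizePx = 16) => rem * rootFontSizePx;
```

remToPx(0.75) gives 12, and bumping the root font size to, say, 18px would scale every icon sized this way accordingly.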
There you go, while the previous version works just as well, this updated one - I think - is simpler and overall better.
UPDATE 8 Feb 2019 In my excitement I failed to realize that TailwindCSS makes it even easier to accomplish what I've outlined in this article, with fewer lines of code and a simplified API. Check out my follow-up article for the details.
As a developer with designer aspirations I've found it a little cumbersome to use SVG icons in my projects. Unfortunately SVG is not as straightforward to use as a popular font icon library such as FontAwesome. There are all kinds of considerations to keep in mind, amongst them fill color and size.
And then there's the verbosity of the code required to render this stuff.
Compare the code for a FontAwesome plane
icon...
<i class="fas fa-plane"></i>
... with the code for the same type of icon from one of my favorite SVG icon libraries, Zondicons:
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20"><path d="M8.4 12H2.8L1 15H0V5h1l1.8 3h5.6L6 0h2l4.8 8H18a2 2 0 1 1 0 4h-5.2L8 20H6l2.4-8z"/></svg>
I mean, who wants to deal with this stuff? And the example above is actually a simple one. The more complex the icon, the more complex the code.
And yet, SVG icons have a lot of advantages over font icons.
I started moving away from FontAwesome towards SVG icons the same way I've ditched Bootstrap in favor of TailwindCSS.
So one of the things I've tried to accomplish is to create a reusable Vue component that I can wrap around the SVG icon definition. That way I don't have to worry about SVG code polluting my views but I can also control the icon's properties (fill and size for now) through a consistent interface in the form of props.
Yeah, I'm still dependent on Vue but all of my projects these days include it, whether I'm building a Laravel app (which comes with Vue) or a front-end app (which would also be Vue). While this article is written from Vue's perspective, I'm sure other front-end frameworks can accomplish the same thing.
After several iterations on the concept I arrived at this version that I feel covers most use-cases that I've encountered. The idea of a dynamic component surfaced after reading this article on dynamic Vue components.
The end-goal is to be able to invoke my dynamic SVG icon like this:
<v-icon icon="menu" fill="red" :size=32></v-icon>
I also want to have a sensible default, so if I do this...
<v-icon></v-icon>
... I'll see a square 24px x
icon colored grey-darkest
going by Tailwind colors.
For this example I'm using the awesome free SVG icon library Feathericons. I just love the subtle style of these icons.
I created this mostly for Vue components inside of a Laravel project because this is what I'm mainly working with on a daily basis. There are some differences compared to a full Vue app, amongst them being the fact that I use auto-import and registration of components in Laravel. In a pure Vue app components are imported in a slightly different way, but that's not the subject of this exercise.
Finally, for fill
I wanted to be able to use TailwindCSS's color classes, so the icon wrapper component is dependent on that.
Starting with a components
folder which contains all my .vue
single-file components, I'll create a folder named icons
and yet another folder named svg
inside of that.
In the icons
folder I have single Vue file named VIcon.vue
. (If you're wondering why the V, it's sort of a Vue component naming convention). This is the icon wrapper component that handles all the logic and figures out which icon to load. Here's what it contains:
<template>
    <svg
        xmlns="http://www.w3.org/2000/svg"
        :width="dynamicSize"
        :height="dynamicSize"
        :fill="dynamicFill"
        :stroke="dynamicFill"
        viewBox="0 0 24 24"
        stroke-width="2"
        stroke-linecap="round"
        stroke-linejoin="round"
        class="feather"
        :class="icon"
    >
        <keep-alive>
            <component
                :is="dynamicIcon"
                :size="dynamicSize"
                :fill="dynamicFill"
            ></component>
        </keep-alive>
    </svg>
</template>
<script>
import { colors } from '../../../../../tailwind';

export default {
    props: {
        icon: {
            type: String,
            required: false,
            default: 'x',
        },
        size: {
            type: Number,
            required: false,
            default: 24,
        },
        fill: {
            type: String,
            required: false,
            default: 'grey-darkest',
        },
    },
    computed: {
        dynamicIcon: function () {
            return `v-${this.icon}`; // default icon: x
        },
        dynamicSize: function () {
            return this.size; // default size: 24
        },
        dynamicFill: function () {
            return colors[this.fill]; // default fill: grey-darkest
        },
    },
};
</script>
<style>
svg {
    @apply cursor-pointer;
    @apply inline-block;
}
</style>
Next, inside the svg
sub-folder I can dump all my icons. Based on the Feathericons library, I will name the files VX.vue
, VMenu.vue
, VArrowDown.vue
and so on. Here's what VMenu.vue
contains.
<template>
    <g>
        <line x1="3" y1="12" x2="21" y2="12"></line>
        <line x1="3" y1="6" x2="21" y2="6"></line>
        <line x1="3" y1="18" x2="21" y2="18"></line>
    </g>
</template>
Notice that I moved the svg wrapper from the original Feathericon .svg file to the parent component and replaced it with an SVG group g.
The dynamic component magic happens here:
<component
    :is="dynamicIcon"
    ...

computed: {
    dynamicIcon: function () {
        return `v-${this.icon}`; // default icon: x
    },
    ...
If the code is not self-explanatory, basically I'm using Vue's component
tag along with the :is
prop to load a component whose name is computed. If I were to load the component statically I would do:
<v-menu></v-menu>
Because I'm receiving the string menu in my :icon prop, the computed property dynamicIcon evaluates to v-menu. At this point Vue knows how to render the correct component.
Next I'll bind a few dynamic properties on the svg
tag with (computed) component props:
<svg
    ...
    :width="dynamicSize"
    :height="dynamicSize"
    :fill="dynamicFill"
    :stroke="dynamicFill"
    ...
If you take a closer look at dynamicFill
you'll notice the definition is:
dynamicFill: function () {
    return colors[this.fill]; // default fill: grey-darkest
}
So what is this weird colors[this.fill] stuff? Well, I'm also importing the colors object from Tailwind's config file, typically located in the root of the project and named tailwind.js. Because it's a JS file, this is easy to do. Here I'm simply referencing a key in the colors object.
If I were to render an icon like this...
<v-icon
    fill="red-darkest"
></v-icon>
... then dynamicFill translates to colors['red-darkest'] and returns the string #3b0d0c, which is how Tailwind's red-darkest color is defined. This, in turn, is applied to the SVG fill and stroke properties.
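In isolation, the lookup boils down to a plain object access. Here's a sketch with a stand-in colors object (the real one is whatever your tailwind.js exports):

```javascript
// Stand-in for the colors object exported by tailwind.js; the real
// file maps every Tailwind color name to a hex string.
const colors = {
  'red-darkest': '#3b0d0c',
  'grey-darkest': '#3d4852',
};

// Essentially what the dynamicFill computed property evaluates to:
const dynamicFill = (fill) => colors[fill];
```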
I hope that made sense but that's all there is to it. Here are a few more examples of how I would use this component.
<v-icon :size=12></v-icon>
<v-icon></v-icon>
<v-icon :size=32></v-icon>
<v-icon icon="menu" fill="green" :size=32></v-icon>
Wink is a new open-source blogging platform from Mohamed Said (a core Laravel contributor) that was released in late 2018. I follow Mohamed's work with a lot of interest and I was stoked when he announced this project. At the time I was actively searching for an engine to drive my future (and current) blog but nothing felt right. Wordpress was out of the question, for many reasons. Wink seemed just right, because it was built on top of Laravel, my favorite (and day-to-day) PHP framework. Match made in heaven, right?
To make a long story short, I installed Wink as soon as it came out, and integrated it into Omigo.sh. It worked really well, but at the time I had just launched the site and deployed it to Linode. The site itself was also built on Laravel. Soon after, I realized that what I had done was overkill. I needed a much simpler solution.
You see, my plan for Omigo.sh is to have it serve as a central hub that will showcase my personal projects, with a dev blog attached. Realistically I would never need the full power of a Laravel/MySQL server. Static pages would be just what the doctor ordered. This meant Wink was not really a good match, despite its ease of integration.
Enter Jigsaw, an open-source static site framework also built on Laravel, from the good people at Tighten.co. I'd heard about it before but never really gave it much thought, not really sure why. I had a vague feeling that Jigsaw was a static site generator which was in line with my needs, but I was afraid it didn't come with all the other features I would require for a blog. Boy was I wrong.
One day I had a little extra time on my hands and decided to give Jigsaw a whirl. An hour was enough to convince me to switch to it.
Wink is built on Laravel, as I mentioned, but it requires a PHP server to run, as well as a database (MySQL). In that respect it is similar to Wordpress (though a great deal less complex).
Jigsaw also uses Laravel but mostly under the hood, to build the static site bundle. It also lets you use Laravel's familiar templating system to customize the generated output in almost any way you want. Jigsaw is also more flexible than Wink in that it can generate a full site (with various pages and content), not just a blog.
I really wanted a static blogging platform, one that saves posts and blog settings as simple text files that you can version control in a Git repository instead of saving them in a database.
+1 for Jigsaw.
Wink's bread and butter is the back-end used to author blog posts. It has a rich-text editor that allows you to apply formatting, insert images, etc. Love it, except for the fact that I very much prefer to write technical posts and articles in Markdown. For that, I don't need a back-end, just a text editor.
Jigsaw on the other hand uses Markdown as the default way to author content.
+1 for Jigsaw.
Wink does not (at the time of this writing) come with any front-end templates. It's up to you to render the blog content in any way you want, preferably with Laravel. And this is just what I did for the first iteration of this blog. It's a fairly trivial process, for a Laravel dev, but at the same time it would be nice to have a starter template that you can customize.
Jigsaw has 2 starter templates that you can pull in optionally when you install it the first time. One is a blog, the other is a documentation site (I already have a few good uses in mind for the latter). The blog template basically scaffolds the entire blog so you can go ahead and use it right out of the box if you want.
What I really liked about Jigsaw is that the front-end is all built on TailwindCSS, my all-time favorite CSS framework. And this makes it super-easy to customize.
+1 for Jigsaw.
There are a few critical features that I need in a tech blog. Wink doesn't render any content out of the box so I would have to build all that myself. Not really something I looked forward to.
Jigsaw comes with all these configured and ready to go:
+1 for Jigsaw.
There's another, very important, area where Jigsaw shines over a classic DB-driven engine like Wink or Wordpress. It's the tiny matter of hosting the site/blog and storing the content. A static site generator means that I don't need a PHP server, nor a database to store my content. It also provides the added benefit of being able to host the entire site with a free service such as Netlify or Surge.
+1 for Jigsaw.
OK, I'll hand it to you. A solution such as Jigsaw is not perfect for every use-case. There are more complex features that would be very hard, if not impossible, to implement without a server-side language and a database. One of them (correct me if I'm wrong) is a custom commenting system. Yes, you can easily integrate third-party systems (which I will very likely do in the future) but if you want to have full control over users and comments, then you'll need a fully-fledged blogging framework. And this is where Wink has the potential to beat Jigsaw.
+1 for Wink.
While this is not necessarily a critical point, Jigsaw does happen to be more mature than Wink. Which means that a lot of the kinks have been ironed out by a wider range of contributors, and fewer breaking changes can be expected going forward. I'll give Jigsaw the advantage here again.
+1 for Jigsaw.
Once I found out Jigsaw supported all the features that I wanted in a blogging platform, I was all over it. While custom-building these features is entirely doable, my end-goal in this case was to be able to blog as soon as possible, not to engage in endless coding exercises, regardless how educational they might be.
The Omigo.sh blog is in its very early stages but I very much doubt I'll be changing the engine again anytime soon.
The reality is that platforms like Wink and Jigsaw are very developer-, language-, and framework-centric. If you're not a developer, or don't like tinkering with code, or not a PHP developer, or not even a Laravel developer, neither of these might be the right solution for you. It so happens that Jigsaw checks all the tech boxes for me: static, markdown, Laravel, Vue, TailwindCSS. Which just happens to be the entire focus of this blog.
You see, one of the things that prevented me from launching a blog a lot sooner was finding the right blogging platform. This is far from trivial. In fact, much like finding the right name for a variable in programming, picking a blogging platform demands a lot of neuron-power and time wasted with suboptimal solutions. Now, I'm confident that I've arrived. I will stick with this engine for a long while.
The first incarnation of Omigo.sh along with the blog was built on top of Laravel, powered by Wink, hosted on Linode, and deployed with Forge.
While the front-end looked almost identical for both the original and the current version, my specific needs weren't exactly met by the initial tech stack. Everything worked but, for reasons I will detail in a future post, kept me figuratively awake at night.
My current stack is: blog engine powered by Jigsaw from Tighten.co, (built on Laravel), and hosted/deployed for free on Netlify. As you can see, my stack is leaner and also cheaper. Well, I still need Linode and Forge for my other projects, but Netlify ensures that at least the blog can remain hosted for free in the future.
I'll write up a lengthier explanation on why I switched but for now, just a quick hint: markdown.
Oh, and so far, Jigsaw has exceeded expectations.
When I ran into this particular problem with Laravel's filesystem, I had less experience with it than I do today (relatively speaking).
To give a little background, for my Laravel development environment I use Homestead on both Mac and Windows.
In one project I wanted to upload some images to the public folder. The Laravel documentation says that you should run the php artisan storage:link
command in order to symlink the public folder (it maps storage/app/public
to public/storage
). By default, uploaded files are private.
In my case, the command seemed to succeed locally but in the browser I couldn't load the images even though they appeared to be in the correct folder. Eventually I determined that the symlink in the Homestead environment was incorrect.
Digging deeper, the problem is that the artisan command seems to create an absolute link to the storage folder, which (if you run the command locally) propagates to the synced environment (Homestead), where the project has an entirely different path. As a result, if you check where the created symbolic link on the server (storage/app/public) is pointing to, you will see something to the effect of:
/Users/LocalUserName/code/MyProjectName
Which, of course, is not the same as public/storage.
So what happens if I run php artisan storage:link
in the Vagrant box? I get this fairly bizarre error.
symlink(): No such file or directory
at vendor/laravel/framework/src/Illuminate/Filesystem/Filesystem.php:315
311▕ */
312▕ public function link($target, $link)
313▕ {
314▕ if (! windows_os()) {
➜ 315▕ return symlink($target, $link);
316▕ }
317▕
318▕ $mode = $this->isDirectory($target) ? 'J' : 'H';
319▕
+16 vendor frames
17 artisan:37
Illuminate\Foundation\Console\Kernel::handle()
Hmm... what now?
The solution proved simple:
First, ssh to the server, navigate to your project folder, then delete the symlink from the public folder:
cd public
unlink storage
Finally, run the command to create the symlink manually (assuming we are still in public/):
ln -s ../storage/app/public storage
Voila, now your images should work correctly.
I've been a software developer (in some form or another) for most of my life, and if there's one certainty in this career it's the constant need for self-improvement.
I started Omigo.sh as a central platform from which I can showcase the personal projects that I am tinkering on in my spare time (yes, I have a developer day job as well), but also as a repository of coding knowledge that I have collected over time.
My intent is to use the Omigo.sh blog for recording techniques and solutions to problems that I have discovered while working with my favorite platforms, including (but not limited to) PHP, Laravel, Vue.js, TailwindCSS, Mac, PC, etc.
As much as possible, I would like to avoid duplicating content that lies a short Google search away, because after all, whenever I find myself in deep waters without an obvious solution, searching is one of the first things I try. That said, it's entirely possible that things will be repeated here at some point; I can assure you it won't be intentional.
Having said all that, and with reasonable certainty that probably no one will read this post anyway, I will now venture wherever the keyboard will lead me.