[
	{
		"content": "Git offers 3 levels of configuration:\n\nLocal. Specific to each local repository, it is stored in the .git/config. It includes branch tracking, remote definition, etc.\nGlobal. This is user-specific and is stored at the root of the user account, in the ~/.gitconfig file. This is the one we are interested in here.\nSystem. Usually stored in /etc/gitconfig, it is shared by all users. It is seldom used.\n\nPriorities run in the usual order: from local to system.\nMost users just put in their name, email and editor preferences into their .gitconfig. This is an appalling under-use. There are many, many aspects of Git that can be configured, and therefore improved, through configuration.\nThis article explores my personal configuration to show you many useful tips.\n\n\nThis article is completed by a (more up-to-date) second part.\n\nFirst of all, make sure you are using a recent Git (1.8+). Many of the configuration options discussed below have only recently been introduced. If an option seems to be ignored in your system, check the name and value carefully, and then look at the release notes to find the version in which the option appeared.\nI maintain a generated public version of my configuration in this Gist. By “generalized” I mean that I remove, or comment out, some processing that I think is tricky to enable by default, because it involves subtle Git behaviors that you should know about before applying them.\nA Git configuration file is similar in syntax to a good old-fashioned INI file: sections are delimited by a [name.section] header, and the property lines that follow belong to the section. Indentation is recommended but not required. The linearised representation of a property is section.name, e.g. core.whitespace for the whitespace property within the [core] section.\nHere is the global gist already:\nLet’s now look at these settings one by one.\nDefault identity for commits\nThis is as basic as it gets: attributing your future commits. We use user.name and user.email for this. However, don’t mess up the Git config of a deployment user on a production server with this kind of stuff, if there are several people who can deploy. Instead, attribute your commit with git commit --author=... so you don’t alter the settings.\nColoring\nMany Git commands are able to colorize their output using ANSI VT codes when the terminal allows it. The auto value has the same meaning as in many other Linux commands (e.g. ls): Git will detect whether it is being used by a VT terminal capable of handling color codes, and if so, will leverage that capability.\nThe color.ui property is an umbrella setting for all more specific color-handling properties, e.g. color.branch, color.diff, color.grep, color.interactive, etc. Basically, we want colors!\nAliases\nGit only allows full command names by default (commit, checkout, etc.), which sometimes makes Subversion users cringe when starting with Git, searching in vain for their git ci. Apart from the fact that a properly configured prompt will offer advanced completion (that’s a topic for another post), the real reason is that Git lets you create as many aliases as you’d like, using the settings in the [alias] section. Each property is named after the alias, and its value is the Git command line without the leading git.\nMy configuration provides 3 aliases I couldn’t work without:\n\nci for commit\nst for status\nlg for advanced contextual log which, as far as I’m concerned, is just as good as graphical logs (branch graph and merges, symbolic references, tags and branch heads, SHAs, authors and relative timestamps). As a bonus, it works in a terminal, hence via SSH…\n\n\nPagination, whitespace and other “core” settings\nBy default, Git paginates everything it displays: if it exceeds the height of the terminal, it uses less. Personally, I can’t stand that: if I want to paginate, I | less myself! By specifying via core.pager to go through cat rather than less, I de facto disable paging.\nWhen opening an editor (especially for commit messages or setting up an interactive rebase), Git chooses the editor like this:\n\nThe GIT_EDITOR environment variable\nThe core.editor setting (my favorite way)\nThe VISUAL environment variable\nThe EDITOR environment variable\nThe default binary set at Git compilation time; usually vi.\n\nIf the default behavior isn’t your cup of tea, you need to specify the command line to run. When using a graphical editor, you’ll need to invoke it so that it’s in wait mode, meaning that it doesn’t yield back control until the relevant file is closed. Not all graphical editors let you do this.\nHere is a list of some command lines!\n☝️ If you are on Windows: if your terminal does not recognize the editor opening command, you should consider passing the full path to the executable (this usually depends on the terminal used).\n\nVSCode: code -w\nSublimeText: subl -w.\nGEdit: gedit -w -s\nGVim: gvim --nofork\nCoda: you will need to setup coda-cli (usually through the always-handy Homebrew) then coda -w.\nNotepad++: yo",
		"description": "Detailed overview of must-have settings for efficiently using Git everyday.  A must-read.",
		"date": 1364947200,
		"image": "/assets/images/art-vid/art-git-config.jpg",
    "_tags": ["tutoriel","git"],
		"title": "A well-honed Git configuration",
		"url": "https://delicious-insights.com/en/posts/git-configuration/",
		"locale": "en",
		"readingTime": "9 min"
	},	{
		"content": "TL;DR\nA git merge should only be used for incorporating the entire feature set of branch into another one, in order to preserve a useful, semantically correct history graph. Such a clean graph has significant added value.\nAll other use cases are better off using rebase in its various incarnations: classical, three-point, interactive or cherry-picking.\nA clean, usable history that makes sense\nOne of the most important skills of a Git user lies in their ability to maintain a clean, semantic public history of commits. In order to achieve this, they rely on four main tools:\n\ngit commit --amend\ngit merge, with or without --no-ff\ngit rebase, especially git rebase -i and git rebase -p\ngit cherry-pick (which is functionally inseparable from rebase)\n\nI often see people put merge and rebase in the same basket, under the fallacy that both result in “getting commits from the branch across in our own branch” (which is, by the way, incorrect).\nThese two commands actually have hardly anything in common. They have entirely separate purposes and, indeed, are not supposed to be used for the same reasons at all.\nI shall try to not only highlight their respective roles, but also equip you with a few reflexes and best practices so you can always produce a public history that is both expressive (concise yet clear) and semantic (viewing the history graph reflects the team’s goals in an obvious way). A top-notch history adds significant value to the whole team’s work, be it contributors coming in for the first time or getting back after a while away, project leads, code reviewers, etc.\nWhen should I use merge?\nAs its name implies, merge performs a merge, a fusion. We want to move the current branch ahead so it incorporates the work of another branch.\nThe real question you should ask yourself is this: “what does this other branch represent?”\nIs it just a local, temporary branch, that I had just created out of precaution, in order for master to remain clean in the meantime? If so, it is not only useless but downright counter-productive for this branch to remain visible in the history graph, as an identifiable “railroad switch.”\nIf the receiving branch (say master) has moved ahead since the branch started, and is therefore not a direct ancestor of it anymore, we’ll treat our branch as “too old” and use rebase to replay its commits on top of our up-to-date master to maintain a linear graph. But if master remained untouched since we branched out, a fast-forward merge (which would be automatic in that situation, by default) will be sufficient.\nIs it a “well-known” branch, clearly identified by the team or simply by my work schedule? Then we turn our previous reasoning on its head. Our branch may represent a sprint or story in our agile methodology, or an issue/ticket in our bug tracking system.\nIt is then preferable, perhaps even mandatory, that the entire extent of our branch remain visible in the history graph. This would be the default result if the receiving branch (say master) had moved ahead since we branched out, but if it remained untouched, we will need to prevent Git from using its fast-forward trick. In both these cases, we will always use merge, never rebase.\nWhen should I use rebase?\nAs its name suggests, rebase exists to change the “base” of a branch, which means its origin commit. It replays a series of commits on top of a new base.\nThis is mostly needed when local work (a series of commits) is deemed to start from an obsolete base. This could happen several times a day, when you try to push local commits to a remote only to be denied because the tracking branch (say origin/master) is stale: since it last sync’d with our remote, someone pushed updates to it, so that pushing our own code path would overwrite that previously-sent, parallel work. This is not nice to our collaborators, so push gives us the boot.\nA merge (which is what pull would do internally, by default) is less than ideal here, as it creates noise, wrinkles if you will, in the history graph, when the whole thing is really just a timing glitch in the sequence of work on the branch. In an ideal world, I would have worked after the others, from an up-to-date base, and the branch would have remained nicely linear.\nA need for rebase also arises when you started a parallel avenue of work (an experiment, an R&amp;D work…) a long time ago but haven’t found time for it again until just now, except the base branch—the one from which your experimental one started out—has moved on considerably since. When you finally hunker down to work on your experiment again, you’d like to start from a more recent base, so you can benefit from its bug fixes and other nice evolutions. But a merge (e.g. of master in experiment) is not what you’re looking for here, even from a conceptual standpoint.\nThere is a final use case, an extremely frequent one actually, for rebase: it’s not about changing the base here, it’s about cleaning the series of commits in the branch. In real life, our ",
		"description": "Each one is best for specific purposes, so learn when to use them efficiently, and why.",
		"date": 1399161600,
		"image": "/assets/images/art-vid/art-git-merge-vs-rebase.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Getting solid at Git rebase vs. merge",
		"url": "https://delicious-insights.com/en/posts/getting-solid-at-git-rebase-vs-merge/",
		"locale": "en",
		"readingTime": "17 min"
	},	{
		"content": "So where am I right now, and how do I upgrade?\nAt the time of this writing, the current release is 2.1.0, from Aug 15, 2014.\n(At the time of bringing this article back to Delicious Insights web properties, the latest version is 2.11.0, from Nov 29, 2016.)\nThe official website maintains official builds for Mac and Windows, plus sources should you want/need to compile instead.\nOn my Mac, I tend to favor Homebrew for such installs and upgrades; it’s usually up-to-date within 48 hours.\nYou can check out your version like so:\n\nTo upgrade on Windows, just use the latest provided download from the official website.\nOn a Mac, use either the official installer if you had gone that route already, or perhaps use an Homebrew upgrade:\n\nOn Linux, if you’re going through official packages (or registered PPAs), use the proper install/upgrade command (usually apt-get on Debian/Ubuntu, yum on Fedora, etc.)\nA word on Debian and Ubuntu\nFor Debian and Ubuntu, official releases usually lag a bit behind, especially LTS releases, that pretty much stick to whatever was available when they originally got out:\n\n10.04 LTS “Lucid Lynx” is on 1.7.0 (Sep. 2010)\n12.04 LTS “Precise Pangolin” is on 1.7.9 (Jan. 2012)\n14.04 LTS “Trusty Tahr” is on 1.9.1 (Mar. 2014)\n16.04 LTS “Xenial Xerus” is on 2.7.4 (Mar. 2016)\n\nI cannot recommend enough that you register the official PPA:\n\n(If you do not have the add-apt-repository command, you need to install the necessary package first☺\n\nThis way you’ll install git from a very recent source (the PPA is usually up-to-date within 3–4 days of original release).\nA curated list of what’s been added to Git\nI’m not interested in paraphrasing the release notes: I’d rather focus on new features than on bug/perf fixes, and even then only pick what’s relevant to me.\nSpecifically, I’ll leave aside everything related to foreign interfaces (bridges between Git and other source control systems) and graphical tools (gitk, git-gui, git-web…), and plumbing commands as well, to focus on porcelain stuff (commands your average user is likely to use on a reasonably frequent basis).\nRather than roll out a section per version, I’ll use three large intervals, intentionally set on the base versions for recent Ubuntu LTS releases.\nBefore we go in, note that something does get better in a consistent way as versions increase: shell completions and advanced prompts. Unless there’s a particularly cool upgrade, I won’t mention it everytime. And yes, at this point the provided prompts for Bash/Zsh run circles around Oh My Zsh’s fancy prompts.\nFrom 1.7.0 to 1.7.9\nThere’s been a lot of 1.7.x releases (12!), but most useful updates got in at 1.7.2, which has long remained my “minimum required version.”\nWe start with numerous safeguards against classic submodule pitfalls:\n\ngit diff and git status are more detailed and explicit when it comes to local changes to submodules, which helps avoid forgotten commits/pushes on these. Various options and configuration variables let you tweak these news behaviors.\ngit fetch --recurse-submodules appears in 1.7.4, to save a few seconds.\nNice safeguard from 1.7.7 on with git push when local commits in submodules haven’t been pushed yet: git push --recurse-submodules=check.\nStarting with 1.7.8, Git folders for submodules are not embedded in the submodule’s root directory anymore on checkout, but stored in the containing repo’s .git/modules, and referenced from the submodule’s gitfile (that is, the submodule’s .git is a text file with a gitdir: path-to-actual-git-dir line). The reason is that it lets you change the referenced commits in submodules from the containing repo, without having to actually check out submodules just for that.\n\nLogs also got some nice love:\n\nThe :/pattern revision syntax, which up until 1.7.1 used a simple text and anchored at the beginning of commit messages, is now interpreted as a non-anchored regex, which is much more useful, really.\nWe get some nice colors in git log --decorate from 1.7.2 on.\nBeyond git log -S: 1.7.4 brings us the regex variant, git log -G. I use this all the time!\ngit log also accepts globs (e.g. *.rb) for paths, starting with 1.7.5.\nA sweet config option, log.abbrevCommit, shows up in 1.7.6, that dispenses with full-length SHAs in logs and, actually, other commands. Less noise.\n\nA few important—some critical—things happened with merges, rebases, pushes and pulls, too:\n\nHere comes the sweet --keep mode for git reset (a variation of git reset --merge, which later turned into git merge --abort)\ngit cherry-pick (and incidentally git revert) learned in 1.7.2 to use merge strategies (using --strategy), which is super handy when dealing with subtrees, for instance.\nThe important config variable merge.ff showed up in 1.7.6, driving the default behavior of git merge when it comes to fast forwards.\nFrom 1.7.6 on, git rebase without parameter won’t just no-op when you’re on a tracking branch: it’ll assume git rebase @{u} and rebase on upstream. Not very useful (I would ",
		"description": "A lot of people use Git without quite tracking what’s coming up in later releases. Sometimes you just go with whatever’s available on your Linux distro, even if that is quite outdated.",
		"date": 1409184000,
		"image": "/assets/images/art-vid/art-git-recent-evos.jpg",
    "_tags": ["git","releases"],
		"title": "What’s new since Git 1.7",
		"url": "https://delicious-insights.com/en/posts/whats-new-since-git-1-7/",
		"locale": "en",
		"readingTime": "8 min"
	},	{
		"content": "You think you know Git? Maybe you do… And yet, I’d bet my shirt that many cool little command-line options remain unknown to you.\nIndeed, as Git versions march on, a lot of such options surface, be it about more comfort, more raw power, or additional safeguards. As they are not a new command per se though, they are usually not touted as much and go under your radar.\nI selected here about thirty options, spread across roughly fifteen commands, that will make your Git life more enjoyable. This makes for an excellent ROI over your next few minutes of reading!\nI will generally put the option right in the section title, intentionally. Still, do not skip a section because you think you know that option: I may use it on another command than the one you think, or for another reason, that may be news to you. Also, I often slap on extra info on associated options and configuration variables.\nPartial (un)staging with -p\nSo you opened a file for a specific reason, perhaps make that damned tracker asynchronous… And you notice in passing that ARIAL roles are missing from a few UX items, and that the footer is still hard-coded instead of coming from the layout, and what not…\nWhen you’re about to commit, you realize that file contains a solid half-dozen (if not more) edits that span multiple unrelated topics. You then have three possible routes:\n\nYou spew a big fat ugly kitchen-sink commit, complete with a lousy message full of “+” signs or, if you’re even lazier, the time-honored useless “Changes,” “Fixes,” “Lots of stuff,” etc.\nYou copy-paste the file somewhere then start undo-ing, if that’s even possible, to only keep the first top, commit, re-apply changes for the second, commit again, then the third… All of this by hand, naturally. Screw-up probability: 99%.\nYou read this, or attend our training classes, and know -p!\n\nThe git add -p command is actually a refinement of git add -i: it pre-selects the interactive add patch mode. In practice, you tell it what file you want to operate on, to go even faster. For instance:\n\nLet me seize that opportunity to remind you that git add is not about putting a file under version control, but to stage an edit, that is, to confirm that edit as a part of the next commit.\nWhen you perform such an add, Git will auto-split the content in hunks, which are groups of edits, using proximity inside the file, and unchanged lines for splitting. If your edits are too close together, Git will probably not auto-split, and you’ll have to do it yourself using the s key (Git will provide a plethora of possible commands, by their initials, in a prompt. If in doubt, use ? to display help), which here stands for split.\nNote that even if you have adjacent edits (edits without unchanged lines between them), you can edit the snapshot on the fly to make it look like what you intend to stage, using the e (edit) command. It’s sort of express Photoshopping for your snapshot. Actually, if you know from the get-go that your file has such adjacent hunks, you can pre-select that mode using the -e option instead of -p. In that case however, Git will not pre-split other hunks for you.\nWhen you’re done, your file will normally appear as both staged and modified. That’s to be expected, as indeed:\n\nthe latest committed version isn’t the same as the staged one: your file thus appears staged.\nthe staged version isn’t the same as the file in the working directory: your file thus appears modified.\n\nYou can check out the diff for the staged version using git diff --staged index.html. If you want to the see the whole staged snapshot, instead of diffs, you can go with git show :0:index.html (that’s a zero, not an O letter).\nAfter that, be extra careful not to do a git commit -a (for instance, git commit -am &quot;Asynchronous tracker&quot;), as that -a will auto-stage every known edit, thereby overwriting the “sculpted” stage you had put together.\nFinally, few people know that git reset also features a -p option, which has the exact same UX as in add, but obviously does the opposite: it unstages selected hunks. It’s often used to split the latest commit, by doing something like this:\n\nEdits are then presented as cancellations of those in the latest commit. You tell which cancellations you want, amend the commit (see below), then complete the extra commit(s) you want with the remaining modifications.\nIt’s a very “quick and pro” way of splitting a commit inside an interactive rebase, using its edit command.\nProperly account for renames using -A\nYou may know that, by default (at least before 2.0), git add behaved as git add --no-all or, if you prefer, git add --ignore-removal. It only used the working directory as a basis to compute its list of files to take into account, which therefore included:\n\nModifications to known files\nNew files\n\nOn the other hand, files known to Git’s index but not found in the disk anymore, which appeared as removed, were left aside.\nThis was a problem for renames and moves, which result in both a “dele",
		"description": "You think you know Git? Maybe you do… And yet, I’d bet my shirt that many cool little command-line options remain unknown to you.",
		"date": 1410739200,
		"image": "/assets/images/art-vid/art-git-30-cli-options.jpg",
    "_tags": ["git","tutoriel"],
		"title": "30 Git CLI options you should know about",
		"url": "https://delicious-insights.com/en/posts/30-git-cli-options-you-should-know-about/",
		"locale": "en",
		"readingTime": "14 min"
	},	{
		"content": "So you fixed a conflict somewhere in your repo, then later stumbled on exactly the same one (perhaps you did another merge, or ended up rebasing instead, or cherry-picked the faulty commit elsewhere…). And bang, you had to fix that same conflict again.\nThat sucks.\nEspecially when Git is so nice that it offers a mechanism to spare you that chore, at least most of the time: rerere. OK, so the name is lousy, but it actually stands for Reuse Recorded Resolution, you know.\nIn this article, we’ll try and dive into how it works, what its limits are, and how to best benefit from it.\nThe usual suspect: control merges\nA situation where rerere comes in really handy is control merges.\nPicture this: you’re working on a long-lived branch; perhaps a heavy feature branch. Let’s call it long-lived. And naturally, as time passes, you get more and more apprehensive of eventually merging this branch in the main development branch (usually master), because as time goes by, the divergence thickens…\nSo to relieve some of that tension and ease up the final merge you’re heading towards, you decide to perform a control merge now and then: a merge of master into your own branch, so that without polluting master you can see what conflicts are lurking, and figure out whether they’re hard to fix.\nIt is indeed useful, and just so you won’t have to fix these later, you would be tempted to leave that control merge in the graph once you’re done with it, instead of rolling it back with, say, a git reset --merge ORIG_HEAD and keep your graph pristine.\nSo as time passes, you get a graph that looks like this, but worse:\n\nThis is ugly and pollutes your history graph across branches. After all, a merge should only occur to merge a finalized branch in.\nBut if you cancel that control merge once you’re done, you’ll have to re-fix these conflicts all over again next time you make a control merge, not to mention on final merge towards master. So what’s a developer to do?\nrerere to the rescue\nThis is exactly what rerere is for. This Git feature takes a fingerprint of every conflict as it happens, and pairs it with a matching fix fingerprint when the problematic commit gets finalized.\nLater on, if a conflit matches the first fingerprint, rerere will automagically use the matching fix for you.\nEnabling rerere\nrerere is not just a command, but a transverse behavior of Git. For it to be active, you need at least one of two conditions to be met:\n\nThe rerere.enabled configuration setting is set to true\nYour repo contains a rerere database (you have a .git/rr-cache directory)\n\nI can’t quite fathom a situation where having rerere enabled is a bad idea, so I recommend you go ahead and enable it globally:\n\nA conflict shows up\nLet’s say you now face a conflict-bearing divergence; perhaps master changed your &lt;title&gt; in index.html a certain way, and long-lived did otherwise.\nLet’s try a control merge:\n\n\nThis looks like your regular conflict, but do pay attention to the third line:\n\nThis tells us that rerere lifted a fingerprint of our conflict. And indeed, if we ask it what files it’s paying attention to on this one, it’ll tell us:\n\nIf we look into our repo, we’ll indeed find the fingerprint file:\n\nThis preimage file contains the full fingerprint of the file and its conflict (the entire blob, if you will).\nRecording the fix\nOK, so let’s fix this conflict. For instance, I’ll go with the following combined title:\n\nI can then verify what rerere will remember once I complete the merge:\n\nI can then mark this as fixed the usual way, with a git add. Then git rerere remaining will tell me what other files I should look into (right now, none).\nAt any rate, for rerere to effectively remember the fix, I need to finalize the current commit. This being a merge, it falls to me to manually perform the commit:\n\nPay attention to the second line:\n\nAnd indeed, that fix snapshot is now a postimage in our repo:\n\nSo I can go right ahead and roll back that control merge, because I don’t want to pollute my history graph with it:\n\nThe conflict re-emerges\nLet’s now assume that long-lived and master both keep marching on. Perhaps in the former, a CSS comes up. And in the latter, the same CSS appears (albeit with different contents), along with a JS file.\nThe time comes when a new control merge seems in order. Here we go:\n\nWe have an add/add conflict for the CSS, and the well-known conflict for index.html. But look more closely around the end:\n\nAs you can see, the conflict about index.html is known already, and has been auto-fixed. Indeed, if you ask git rerere remaining what’s up, it’ll tell you that only style.css is still in trouble.\nSo let’s start with marking index.html as being okay, by staging it:\n\nBy the way, if you prefer rerere to auto-stage files it solved (I do), you can ask it to: you just need to tweak your configuration like so:\n\nFrom now on, I’ll consider you have this setting on. As we did before, let’s fix the remaining conflict, and then:\n\nWe now have two pairs of fin",
		"description": "So you fixed a conflict somewhere in your repo, then later stumbled on exactly the same one (perhaps you did another merge, or ended up rebasing instead, or cherry-picked the faulty commit elsewhere…). And bang, you had to fix that same conflict again. That sucks.",
		"date": 1415059200,
		"image": "/assets/images/art-vid/art-git-rerere.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Fix conflicts only once with git rerere",
		"url": "https://delicious-insights.com/en/posts/git-rerere/",
		"locale": "en",
		"readingTime": "8 min"
	},	{
		"content": "Oh boy, are branches great. They let you have entirely different versions of a given file, depending on the context.\nThe thing is, in a few (not so rare) situations, you may want to version a file that changes from branch to branch, but retain its current content when merging another branch into yours.\nThe usual suspects are non-sensitive files that vary based on the runtime context (development, staging, production) because they contain URLs, domain names or port numbers that need to adjust:\n\ne-mail server configuration that would use a local handling outside of production (e.g. through the excellent mailcatcher)\nlog configuration that would dump to local-disk files in dev, but consolidate to some central service otherwise\netc.\n\nThis kind of file sure needs versioning. But when you merge another branch into yours (say you’re doing a control merge of master), how do you retain your current version for just these files, without having to resort to special commands or custom workflows?\nThat’s easy, actually. Let me show you how.\nTo retain our current version of a file during a merge (a merge is always incoming, remember: we merge into the current branch), we need to make use of an oft-ignored Git feature: Git attributes.\nGit attributes\nThis mechanism lets us map files or folders (we use globbing patterns such as secure/* or *.svg) to specific technical properties.\nThese mappings are usually versioned themselves, just like what we would put in .gitignore files, but these are stored in .gitattributes (and just like .gitignore has a strictly-local buddy at .git/info/exclude, we also have .git/info/attributes).\nThe format is simple: every line that neither is empty nor starts with a hash (#) sign to denote a comment uses a globbing-pattern = attribute-info format (the amount of whitespace being irrelevant).\nAn attribute can be set (present with no specific value), unset (present in negative form), set to a value or unspecified. For our purpose here, we’ll use a specific value.\nWhile this lets us create custom attributes, or group together attribute combos as meta-attributes, Git does come with a fair number of predefined attributes that let you do amazing things…\nMerge drivers\nWhat we’re interested in here is the merge attribute, that lets us map files to a merge driver, a command responsible for the actual merging of these files.\nThis attribute has default values based on the detected type for this file: it would normally be considered text or binary.\nWe can, however, create our own merge drivers (and define these in our usual Git configuration, say our ~/.gitconfig file), then use attributes to map specific files to our drivers. Git can call such a driver with up to three arguments, in whatever order we specify: paths to the common-ancestor (merge base, in Git parlance) version of the file, to our version, and to the merged branch’s version.\nThe key point is that such a pilot is supposed to store the result of the merge in our own file if it manages the merge properly, which it indicates by exiting with a zero exit code (as per POSIX usual). So, a driver that does not touch the files and exits with code zero leaves our current file alone during a merge.\nEureka!\nWe don’t even need to write an empty script (or one that would just exit 0), because in any Bash/zsh/shell environment you’ll find a true command, often a shell built-in, that does just that. Let’s use that.\nSetting up\nSo let’s start by defining a merge driver that would always favor our current version of the file, by making use of the existing true command. We’ll call this driver ours, to keep in line with similar merge strategies:\n\nDo you already have a Git repo for testing? Oooh, let’s smudge it! Or, let’s just whip a repo up:\n\nNow let’s add a .gitattributes file at the root level of our repo, that would tell email.json to use that driver instead of the standard one:\n\nThere, we’re good to go!\nPrepping for a test run\nLet’s just put ourselves in a relevant test situation, first with a file that will start as common before branching out:\n\nThen let’s make a demo-prod branch and put some mixed work in there:\n\nFinally, let’s go back to our previous branch and add some mixed work in it too:\n\nAlright, go!\nOK, we’re all set to test this baby. If we attempt to merge our current branch in demo-prod, the demo-shared file should merge normally (without conflicts, too), but we should retain our production variant of email.json:\n\nVictory! 💪\nI’d like to thank Scott Chacon who, in the chapter about attributes of his Pro Git book, put this tip forth; also, Julien Hedoux who, by just asking me how this could be done, had me delve into the issue and dig this up.\nEdit: this only applies to files that require a merge, during an actual merge. So, rebasing skips this, but more importantly, during a merge, if the file was only modified in the merged branch since the merge base, as no merge is required, the modified version will still apply. Still, it’s valuable for changed-",
		"description": "Oh boy, are branches great. They let you have entirely different versions of a given file, depending on the context. The thing is, in a few (not so rare) situations, you may want to version a file that changes from branch to branch, but retain its current content when merging another branch into yours.",
		"date": 1417132800,
		"image": "/assets/images/art-vid/art-git-merge-preserve.jpg",
    "_tags": ["git","tutoriel"],
		"title": "How to make Git preserve specific files while merging",
		"url": "https://delicious-insights.com/en/posts/how-to-make-git-preserve-specific-files-while-merging/",
		"locale": "en",
		"readingTime": "4 min"
	},	{
		"content": "Oooh, what a nasty bug you just noticed! Alas, you can’t seem to find out where it originates just now, and it appears to have been around for a while, too… How can you avoid combing through the entire history?\nIn this post, we’ll see how Git assists us in isolating a bug’s original commit as fast as possible, even if it ends up being far back in our log.\nThe right way to comb through your history\nA commit log is nothing but a sorted list. What’s the sorting criterion? Time! Commits run from the oldest to the newest, even if they sometimes branch out and merge back in along the way.\nWhen you look for something in a sorted list, it would be a shame to simply start at the beginning and walk your way to the end… You probably already played the “Higher, Lower” game: you have to find a number between, say, 1 and 100. In such a situation, I would worry about someone starting around 1, or around 100, then picking candidates at random. Instinctively, most people start at the middle, hence 50, and if they are told “lower,” pick the middle of the resulting subset, 25, and so on and so forth.\nThis kind of algorithm has a name: binary search, also referred to as dichotomic search. It lets you find what you’re looking for using at most log2(n) attempts, which for a [1;100] set is 7 tries. It gets even more impressive as the set grows significantly: for [1;1,000,000,000], you’d need at worst only 27 guesses! A massive time saving…\nYou can apply this principle to searching for the first commit that, in a commit log (a time-ordered series of commits), introduced a bug.\nBy the way, the mathematical application of binary search is called bisecting, which gave its name to the git bisect command.\nMethodology\nThe git bisect command has a number of subcommands.\n\nWe start with a git bisect start. You can provide a test range on the fly (a bad commit, generally the HEAD, and a good commit), otherwise you’ll define them next:\nA git bisect bad states the first known faulty commit (if you don’t give any, it will be assumed to be HEAD, as per usual).\nA git bisect good states a known good commit (a commit which doesn’t exhibit the bug). This should be as close as possible to the faulty one, but at worst you can pick a far-away commit to avoid sifting through recent history.\nFrom that point on, bisecting starts: Git checks out in the middle of the range (or thereabouts), tells us where we’re at, and asks for a verdict: depending on the situation, we’ll reply with a git bisect bad or git bisect good (and more rarely, git bisect skip).\nAfter a few rounds, unless we answered garbage or left too many commits unanswered, Git will tell us what the original faulty commit was.\nWe can then get out of bisecting with a git bisect reset.\n\nAll together now\nIn order to practice this, let’s use a sample repo I lovingly crafted for you, with plenty of wacky commit messages and 4 contributors many of you will undoubtedly recognize…\nDownload the sample repo now\n\nUncompress this wherever you please: it creates a bisect-demo directory in which you then open a command line (if on Windows, prefer Git Bash). This repo contains over 1,000 commits spread across a year or so and, somewhere in there, a bug slipped in.\nYou see, if you run ./demo.sh, it displays a subdued KO, when it should instead clarion a flippant OK. This issue goes back quite a long way, and we’ll use git bisect to hunt it down.\nIn this case we have no idea what the latest correct commit was, so let’s take the first commit, d7ffe6a. We first check that demo.sh looked good in it:\n\nRight, this should be fine…\nArmed with this knowledge, we can now start bisecting:\n\nNote we could have started this procedure with a single command:\n\nFrom there, all we have to do is test each proposed commit, and reply with good or bad:\n\nNotice the final display:\n\nAnd indeed, the listing mentions a modification on demo.sh.\nHere, if our prompt is to be believed, we are indeed on bisect/bad, the faulty commit. This isn’t necessarily so when bisect is done, it entirely depends on the path it followed through the commit log, and once the faulty commit is identified, bisect doesn’t automatically check it out.\nAt any rate, a git show 465194a will prove that this is indeed where the issue got in:\n\nLet’s not forget to stop bisecting and get back to our original HEAD, using a git bisect reset:\n\nAnd there you go! Although the faulty commit was 881 positions back, it only took us 10 tests to hunt it down! Even with our fast test protocol, we saved a lot of time. Imagine when the test protocol is slower (compiling, driving execution, etc.): the speed gain then becomes enormous.\nUntestable/ignorable commits\nIt can happen that specific commits, or even whole commit ranges, need not be tested. Either because you can’t test it (obsolete libs and dependencies, change of processor architecture…) or because you know they will not exhibit a testable behavior. In such situations, you can simply answer with a git bisect skip.\nYou can actu",
		"description": "Oooh, what a nasty bug you just noticed! Alas, you can’t seem to find out where it originates just now, and it appears to have been around for a while, too… How can you avoid combing through the entire history?",
		"date": 1418083200,
		"image": "/assets/images/art-vid/art-git-bisect.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Git Bisect: quickly zero in on a bug’s origin",
		"url": "https://delicious-insights.com/en/posts/git-bisect/",
		"locale": "en",
		"readingTime": "8 min"
	},	{
		"content": "If you used submodules before, you certainly got a few scars to show for it, probably swearing off the dang thing. Submodules are hair-pulling for sure, what with their host of pitfalls and traps lurking around most use cases. Still, they are not without merits, if you know how to handle them.\nIn this post, we’ll dive deep into Git submodules, starting by making sure they’re the right tool for the job, then going through every standard use case, step by step, so as to illustrate best practices.\nSubmodules, like subtrees, aim to reuse code from another repo somewhere inside your own repo’s tree. The goal is usually to benefit from central maintenance of the reused code across a number of container repos, without having to resort to clumsy, unreliable copy-pasting.\nIn the remainder of this text, I’ll call such reused code, present somewhere inside container repo trees, a “module.” As for project code that reuses said module somewhere inside its working directory’s tree, I’ll call that a “container.”\nAre they the right tool for the job?\nThere are a number of situations where the physical presence of module code inside container code is mandated, usually because of the technology or framework being used. For instance, themes and plugins for Wordpress, Magento, etc. are often de facto installed by their mere presence at conventional locations inside the project tree, and this is the only way to “install” them.\nIn such a situation, going with submodules (or subtrees) probably is the right solution, provided you do need to version that code and collaborate around it with third parties (or deploy it on another machine); for strictly local, unversioned situations, symbolic links are probably enough, but this is not what this post is about.\nOn the other hand, if the technological context allows for packaging and formal dependency management, you should absolutely go this route instead: it lets you better split your codebase, avoid a number of side effects and pitfalls that litter the submodule space, and let you benefit from versioning schemes such as semantic versioning (semver) for your dependencies.\nIf the technological context allows for packaging and formal dependency management, you should absolutely go this route.\nAs a reminder, here’s a list of the main languages and their dependency management / packaging systems and registries:\n\n\n\nLanguage\nTool / Registry\n\n\n\n\nClojure\nClojars\n\n\nErlang\nHex\n\n\nGo\nGoDoc\n\n\nHaskell\nHackage\n\n\nJava\nMaven Central\n\n\nJavaScript\nnpm, Bower\n\n\n.NET\nnuget\n\n\nPerl\nCPAN\n\n\nPHP\nComposer / Packagist / Pear\n\n\nPython\nPyPI\n\n\nRuby\nBundler / Rubygems\n\n\nRust\nCrates\n\n\n\nHonestly, if you can manage your code dependencies by packaging reused code cleanly in “centralized” modules and using dependency management tools, do it. For real. Honest. This will save you a world of pain (and you don’t necessarily have to publish your packages out in the open, these systems often allow for private packages too).\nStill, if you have a solid requirement to embed reused code right inside the container code, then you are left with a choice between submodules and subtrees.\nSubmodules or subtrees?\nIn general, subtrees are better. Hey, I’m doing a bang-up job of selling you this post, aren’t I? The fact is that submodules and subtrees are radically different, almost opposite in fact, be it in their concepts or their behavior.\nMost people go with submodules for a few common reasons. Submodules have been around for a good long while, have their own Git command (git submodule), detailed docs, and a behavior not entirely unlike Subversion externals, which makes them feel falsely familiar. Adding a submodule is very simple (a quick git submodule add), especially compared to adding a subtree. Only later do all the pitfalls and traps come and bite everyone, every day.\nIt’s precisely because submodules have caused so many poor unsuspecting Gitters pain that we chose to cover them first, and subtrees later (our next in-depth article).\nStill, sometimes, submodules are the right choice. It’s especially true when your codebase is massive and you don’t want to have to fetch it all every time, a situation many tentacular code bases grapple with. You then resort to submodules so your collaborators don’t necessarily have to fetch entire blocks of the code base. Various open-source projects use submodules for precisely that reason (or because of heavy modularization not natively handled by their main language’s ecosystem).\nYou should also strive for submodule code to remain independent of particularities of the container (or at least, rely on external configuration to handle such particularities), as submodule code is central code, shared across all container projects. Working around this by littering your submodule repo with container-specific branches is like opening Pandora’s box: it’s abusive coupling, going against modularization and encapsulation principles, and is sure to come back and bite your ankle at some point.\nSubmodule fundamen",
		"description": "If you used submodules before, you certainly got a few scars to show for it, probably swearing off the dang thing. Submodules are hair-pulling for sure, what with their host of pitfalls and traps lurking around most use cases. Still, they are not without merits, if you know how to handle them.",
		"date": 1419984000,
		"image": "/assets/images/art-vid/art-git-submodules.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Mastering Git submodules",
		"url": "https://delicious-insights.com/en/posts/mastering-git-submodules/",
		"locale": "en",
		"readingTime": "22 min"
	},	{
		"content": "A month ago we were exploring Git submodules; I told you then our next in-depth article would be about subtrees, which are the main alternative.\nUpdate March 25, 2016: I removed all the parts about our now-deprecated git-stree tool. You should look at the awesome git-subrepo project instead if you want that kind of goodness.\nAs before, we’ll dive deep and perform every common use-case step by step to illustrate best practices.\n\nSo here’s the promised article! If you haven’t read the submodules-related one, I urge you to read it first, if only to be able to contrast and compare both in a useful way, and to grasp the core needs better.\nIn particular, it is important that you assert you don’t have a choice and must resort to submodules or subtrees instead of a clean, versioned dependency management (which is always better, when doable).\nWe’ll contrast subtrees with submodules whenever relevant; if you have read the article in question, that’ll help you better internalize these details.\nThe terminology we’ll use here is the same as in our previous article: we’ll name “module” the third-party code we inject somewhere in our container codebase’s tree. The main project’s code, that uses the module internally, will be referred to as “container.”\nSubtree fundamentals\nA quick reminder of terminology first: with Git, a repo is local. The remote version, which is mostly use for archiving, collaboration, sharing, and CI triggers, is called a remote. In the remainder of this text, whenever you read “repo” or “Git repo”, remember it’s your local, interactive repo (that is, with a working directory alongside its .git root).\nWith subtrees, there are no nested repos: there’s only one repo, the container, just like a regular codebase. That means just one lifecycle, and no special tricks to keep in mind for commands and workflows, it’s business as usual. Ain’t life sweet?\nThree approaches: pick one!\nThere are three technical ways to handle your subtrees; although it’s sometimes possible to mix these approaches, I recommend you pick one and stick with it, at least on a per-repo basis, to avoid trouble.\nThe manual way\nGit does not provide a native subtree command, unlike what happens for submodules. Subtrees are not so much a feature as they are a concept, an approach to managing embedded code with Git. They mostly rely on the adequate use of classic porcelain commands (mostly merge and cherry-pick), along with a plumbing one (read-tree).\nThe manual approach works everywhere, and is actually quite simple, but requires a good understanding of the underlying notions so you execute the few procedures properly. We’ll use that as a starting point, because it offers the best degree of control over operations, and leaves us with complete freedom in how we manage history (including its graph) and branches…\nThe git subtree contrib script\nIn June 2012, with version 1.7.11, Git started bundling a third-party contrib script name git-subtree.sh in its official distro; it went as far as adding a git-subtree binding to it among its installed binaries, so that you could type git subtree and feel like it were a “native” command.\nIntegration stops there, however; the “documentation” is not a man page, and is therefore not installed as such. The usual help calls (man git-subtree, git help subtree or git subtree --help) are not implemented. A git subtree with no arguments dumps a short synopsis, without further info. Only the text file linked at the beginning of this paragraph provides info, and it is buried down in the contrib/ directory of your Git install.\nThis script, that I will henceforth refer to as git subtree, has a few notable merits: mostly it is robust and offers familiar syntaxes (add, pull, push…) on top of operations that are sometimes complex. However, it also comes with a few operations (e.g. split) and notions (e.g. --ignore-joins and --rejoin) that are rather confusing at first, not to mention its very peculiar understanding of --squash…\nMost importantly, it maintains a subtree-specific “branch” that gets merged on every git subtree pull and git subtree merge. This means it will clutter your graph forever, and I, for one, have a strong distaste for this.\nAnother issue is, it won’t let you pick which local subtree commits to backport with git subtree push: it’s an all-or-nothing affair. This contradicts one of the key benefits of subtrees, which is to be able to mix container-specific customizations with general-purpose fixes and enhancements.\nStill, it’s been here for a while and has therefore been considerably tested (both in the test suite and battle-testing sense), which is not to be dismissed.\ngit-subrepo\nFor a while, we used our own custom solution, named git-stree, that did a reasonable job meeting all our needs, but had a number of dusty corner cases where it would just fall apart. This article used to detail that tool, but starting March 25, 2016 it’s officially deprecated.\nThis is in favor of a wonderful third-party tool",
		"description": "A month ago we were exploring Git submodules; I told you then our next in-depth article would be about subtrees, which are the main alternative.",
		"date": 1422576000,
		"image": "/assets/images/art-vid/art-git-subtrees.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Mastering Git subtrees",
		"url": "https://delicious-insights.com/en/posts/mastering-git-subtrees/",
		"locale": "en",
		"readingTime": "16 min"
	},	{
		"content": "January 16, 2016. Hm…\nI’ve been thinking about this, on and off, for a couple months now. And now we’re here. Exactly 20 years ago today, almost to the minute when I sit down to write this, my professional IT career started.\nI was going to just let that pass without any special measure, but a good friend knew about this milestone of sorts, and was so kind as to send me a gorgeous bouquet of 20 yellow roses to mark the occasion. This moved me strangely, got my gears grinding for a couple hours, and led me to want to write something about it.\n\nA young man’s game?\nOn the one hand, this makes me feel a bit old. Not ancient, certainly, but not young, either. I’m 38, after all, which has made me the senior person to most FLOSS projects I’ve contributed to, or at most meetups I’ve gone to, but fortunately, I also talk or work with a number of people in the field I greatly admire, that are 5 to 15 years ahead of my own age. So.\nWell, most of the time I don’t. Sometimes I do catch myself thinking “man, 38 already, and what have you accomplished? Where’s your startup-turned-big-success? Or your project-that-changed-the-world?”. But see, it’s not just that this could still happen, however faint or random the chance of that is. It’s mostly that I revised my goals, or more exactly, I revised my priorities.\nPriorities\nI found my better half 14 years ago, which was quite early in my life. I’ve been my own boss, and running a pretty successful business that lets us live quite large, for almost 5 years now. 9 months ago, I hired my first employee, and that’s a full-time, well-paid engineering position. And 8 months ago, I finally became a father. My son Maxence is everything I could have hoped for, ten-fold.\nFuck you, startup game.\nI’m not even remotely interested in moving to the valley and playing Bubble-yet-again. I have health, home, family, friends, love and a decent amount of success. I even have a tiny slither of professional fame, for what it’s worth. If that’s not a great place to be, then I don’t know what is.\nStill, 20 years. Damn. That’s something. And what years these have been! Especially the first half. I’ve been incredibly lucky, although I do take credit for seizing the opportunities I was blessed to find on my path.\nAllow me to reminisce.\n20 years in 10 minutes\nOr something.\n1996–1999: The Delphi years\nOn January 16, 1996, sometime in the vicinity of 19:00, I got out of what amounted to my first job interview and sat down at a Windows 3.11 For Workgroups workstation to start seeding a Paradox database used by a large Delphi program.\nThis was at a small company in Paris that was then pretty much at the top of the Borland game, Europe-wide. Delphi 1 had gone out the year before and pretty much H-bombed the whole Windows development world. In 1996 and 1997, Delphi 2 and Delphi 3 would have the same effect.\nDelphi 1 pretty much H-bombed Windows development\nYou’ve got to remember (or learn, you youngsters) that Borland had invented IDE’s all the way back in 1987 with Turbo Pascal 4, and ruled supreme in the development tools universe.\n\nMy company featured an incredible R&amp;D team, average age 21 maybe, from whom I could learn an enormous amount of technical and professional skills in a short time span.\nIn March 1996, I started training professionals on Delphi, through that company. As months went by, I trained more and more difficult courses, and moved up the R&amp;D ladder to full-fledged engineering and architecture. And on November 4, 1997, the day I turned 20, I gave two major, cutting-edge talks at BorCon France 97, the nationwide conference on Borland tools.\nA year later, I was at Borland US in Scotts Valley, California, working at the Delphi PSO department and later contributing to Delphi R&amp;D, as we were working on the upcoming Delphi 5. I then moved back to Paris, in order to finish my Masters in Computer Science, intending then to get back to the US once done.\n1999–2002: Getting into Java\nBut 4 months later, in May 1999, I got wilfully recruited to a “dream-team” of 3 that was supposed to write, from scratch, a full-blown ISP portal that would serve as subscribers’ homepage, aggregating data from well over 20 sources in a hodge-podge of formats to produce its contents, and it all had to be in J2EE. Which, back then, had a huge “Alpha: Preliminary Draft” slapped across all its specs, a horrendously slow and buggy JVM, and only one sort-of-server-that-crashed-once-a-day called Java Web Server. Oh, and yes, this had to be done in three weeks.\nBack in 1999, we still thought Java and XML were actually good ideas.\nYou have to remember, back in 1999, we still thought Java and XML were actually good ideas. The web server side was C, FastCGI and Perl back then, and none of the mammoth over-architecture trends that later ruined that ecosystem was there to deter us yet, so we had mitigating circumstances. We did manage to sneak in a Scheme interpreter, though.\nAnd we pulled through. On June 15, 1999, at 10:",
		"description": "A look back on Christophe’s 20 years so far of professional IT career.",
		"date": 1452902400,
		"image": "/assets/images/art-vid/art-20-years.jpg",
    "_tags": ["post"],
		"title": "20 years.",
		"url": "https://delicious-insights.com/en/posts/20-years/",
		"locale": "en",
		"readingTime": "7 min"
	},	{
		"content": "Now we’re talking.\nA couple days ago, the Node.js Foundation released its first-ever Node.js User Survey Report. It is chock-full of interesting data points. Here’s what peaked my interest most:\n\n3.5 million Node.js users. We’re definitely not in Kansas anymore. Node is not a fringe tech, or even in the small-kids playground, at all. It’s right up there with leading enterprise techs.\nNode.js throughout the stack. 62% respondents use Node.js and its ecosystem (modules, npm-installed, etc.) for both their backend and frontend code.\nNode LTS sees strong uptake. 45% BigCos using Node have upgraded already to the v4 series, and 80% of the remainder plan to do so this year.\nJS drives the IoT world. 96% of the IoT respondents use JS/Node for development. There’s just no other widespread contendant, especially now that Microsoft started maintaining a Chakra Core-based variant for even smaller-capacities devices.\nIn other languages involved in projects the respondents work on, Java, .NET and PHP see planned usage decrease, whilst Python and C++ are expected to rise a bit.\nContainers = BFFs. Over 45% of Node.js users rely on Docker for their development environment. This goes up to 58% in the IoT segment.\n\nAs always, YMMV. The survey “only” had 1,760 respondents, which still covered a lot of production areas and use cases. But I sure like what I see there.\nIf you’re into infographics, here’s the big one.\n",
		"description": "A couple days ago, the Node.js Foundation released its first-ever Node.js User Survey Report. It is chock-full of interesting data points. Here’s what peaked my interest most.",
		"date": 1460678400,
		"image": "/assets/images/art-vid/talk-node-everywhere.jpg",
    "_tags": ["post","node"],
		"title": "Key figures from the Node.js Foundation user survey",
		"url": "https://delicious-insights.com/en/posts/node-js-key-figures-2016/",
		"locale": "en",
		"readingTime": "1 min"
	},	{
		"content": "It’s here!\nLast December, we had told you about the release of the first-parter in our GitHub training video series: Learning GitHub. We had then hinted at the second part, aimed at topics even more advanced.\nWell, here it is: Mastering GitHub is out, 5+ more hours of amazing content that covers intricate areas:\n\nAll about GitHub Pages. Not just the fire-and-forget generator approach, but all the way to full-on manual Jekyll usage, complete with plugins, metadata, etc.\nRemember that GitHub Pages are not just a great way to put up online docs about your project (perhaps through a GitBook), but also offer fast, CDN’d HTTPS-capable static hosting, which is great for live tech demos of stuff like Service Workers and other “privileged APIs.”\nWizard Tricks explores secret features accessible only through URL tweaking, keyboard shortcuts, Gist local cloning, and more. Plenty of power lurks under the surface!\nWe showcase integration with external services through multiple live demos of registering at, and hooking with, numerous types of services in the GitHub ecosystem, and demonstrating them in use: from advanced issue management to quality monitoring to continuous integration to chat rooms, we cover all typical use cases and unveil a world of opportunities for you and your team!\nGitHub’s API gets a great run-through with comprehensive demos that have us put together a custom continuous integration service (touching on third-party app authentication, Web Hooks, the Pull Requests API and the Statuses API, among other things), plus a quick multi-file Gist creation tool straight from the command-line, and wrap up with a nifty issue-to-pull-request CLI converter.\nFinally, we delve into Advanced Account Management, from the details of billing to the nitty-gritty of GitHub Organizations, their specific workflows, management methods and extra requirements when handling stuff like repo transfer or third-party app authorization, for instance.\n\nWe worked in close touch with GitHub Training to make sure we left nothing unexplored. Sure, GitHub has recently woken from a mild feature slumber and started pushing a lot of new stuff out, but the content remains 100% relevant.\nWe are extremely proud and happy to see the finished product hit market, and you can event get it at 50% off (or more!) on May 3, 2016 for the #DayAgainstDRM!\nShow me the goods, now!\nYou can check it out right now:\n\nOn O’Reilly’s regular website\nOn their dedicated Infinite Skills site\n\nWe hope you’ll love it and look forward to your feedback!\n",
		"description": "Last December, we had told you about the release of the first-parter in our GitHub training video series: Learning GitHub. We had then hinted at the second part, aimed at topics even more advanced.",
		"date": 1462147200,
		"image": "/assets/images/art-vid/art-git-github.jpg",
    "_tags": ["git","announcement","video course","paid"],
		"title": "Mastering GitHub: just released!",
		"url": "https://delicious-insights.com/en/posts/mastering-github/",
		"locale": "en",
		"readingTime": "2 min"
	},	{
		"content": "The git reset command is a formidable tool unfortunately far too often misunderstood or poorly used. This is too bad, as it opens up a wide range of solutions and tips to optimize our work and workflows.\n\nIn order to best use git reset, you must understand its context. So this piece will start by revising a number of Git fundamentals. If you think you’re solid there, just scroll down to the “So what about Git reset?” heading. But I would advise you read through. You never know…\nResetting lets us tweak our version history and ongoing work. To do this, we must understand:\n\nhow our history gets built;\nhow Git handles our ongoing work;\nhow that works gets archived in our history;\nwhat the mechanisms are to browse / traverse our branches and versions.\n\nFundamentals\nSHA-1’s\nIn Git’s context, a SHA-1 is a technical reference for an object in the Git database. In reset’s context, we mostly care about commits. This is really just a checksum of the commit’s tree and other metadata. If you’re curious, Pro Git has a great section on this.\nHEAD: “You are here”\nHEAD is a pointer, a reference to our current position in terms of history. It states which commit we’re working on top of. It’s a bit like our shadow: it follows us everywhere we go!\n\nBy default, HEAD references the current branch, e.g. master. But we can move it around to any reference or raw SHA-1. Technically it’s just a text file stored in .git/HEAD:\n\nIn turn, .git/refs/heads/master contains its tip commit’s SHA-1. Such a file then contains the commit’s metadata and tree information, which we can introspect using the plumbing command git cat-file:\n\nUsing git reset we move HEAD around as we see fit. Actually, whenever we have an active branch (which is by far the most common use case), the branch itself is repositioned, and HEAD just follows along.\nA word about ORIG_HEAD\nWhen you peeked into your .git directory, you might have seen a file named ORIG_HEAD. It’s related to HEAD, but always contains a raw SHA-1 instead of a named reference.\n\nORIG_HEAD backs up the position of HEAD before a potentially dangerous operation (merge, rebase, etc.). This way, should things go awry, Git will be able to come back to the position before that by doing a git reset --keep ORIG_HEAD.\nHowever, if you encounter an error with the --keep option, usually when there are conflicting files, you can try to use the --merge option instead. Be careful with this option though, because if you have indexed work that you want to keep, Git will scrap that work without asking for confirmation.\nAreas\nYou probably already know that one of Git’s leading benefits is that your work is mostly local: a Git repo has its own local lifecycle, independent of its remote counterpart. This is great for performance, but not just for that.\nThis article focuses on that “local work.” As a complement to what we’re explaining here, we recommend this great interactive cheat sheet.\nGit manages your work through 3 major local areas:\n\nYour working directory\nThe index, or stage\nThe (local) repository\n\nThere are two other areas (the stash, and the remote) but they’re largely irrelevant to the current discussion.\n\nThe working directory\nIt is the complete set of directories, subdirectories and files you’re working with for a particular project, at the root of which you normally have your .git directory, as a result of having called git init there.\nThe stage\nThis truly is the staging area for your next commit: this is where you put snapshots of whatever parts of your ongoing work you’re greenlighting for the next commit.\nYou add stuff to the stage through the git add command.\nThis area is known by many names: index (mostly in technical docs of Git), stage, staging area, staged files, cache (hence the legacy --cached options to commands such as git diff and git rm)… We favor stage.\nThe index name is most apparent in the name of the technical file that holds its current list of known files and trees: .git/index. You can see what’s in there in many ways, for instance through the plumbing command git ls-files --stage, which displays it as a tree (it is a tree, Git-wise):\n\nIn short, the stage contains all necessary info for Git to create a commit, including a merge commit.\nThe (local) repository\nThis is all the metadata related to your versioned work: commits, references, local change history, configuration… It’s sort of like an archive room where everything you send is neatly compressed, labeled and stored in a way that makes retrieval as fast as possible whilst still optimizing storage.\nSending stuff in there is what git commit does.\nYou’re free to shuffle this around until you send a copy out to your remote repo, using git push. Even after that, you might want to tweak your local repo, but that’s not the point of this article.\nAreas redux\nImagine your Git repo as a photo album.\nThe working directory is your camera, the venue you’re shooting at, your lighting and the subject of upcoming photos.\nThe stage is a list of snapsho",
		"description": "The git reset command is a formidable tool unfortunately far too often misunderstood or poorly used. This is too bad, as it opens up a wide range of solutions and tips to optimize our work and workflows.",
		"date": 1462924800,
		"image": "/assets/images/art-vid/art-git-reset.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Mastering Git Reset: Commit Alchemy",
		"url": "https://delicious-insights.com/en/posts/git-reset/",
		"locale": "en",
		"readingTime": "13 min"
	},	{
		"content": "Beyond Git commands and standard revisions cycle, we can use hooks around specific Git commands to help users automate daily tasks.\nThis article complements the official documentation and the the manual page.\n\nPrinciples\nGit hooks let you trigger scripts (Bash, Node.js, Perl, Python, PHP…) around existing commands. Using these, we can automate some of the user-side work to make it more reliable (and some server-side work, too.).\n\nBy default you’ll find these hooks in each project in the .git/hooks directory.\nThey follow naming conventions &amp; must be executable (chmod +x .git/hooks/…).\nBecause of their location they can be deleted or disabled by the user.\n\nTherefore user-side hooks are more of an optional safeguard than an absolute barrier.\nAlos note that users can circumvent a few hooks with using --no-verify option (available only for pre-commit and commit-msg hooks).\nOn each project initialization Git injects a collection of sample hooks, marked with the .sample file extension (e.g. .git/hooks/pre-commit.sample). Remove that extension to enable them (e.g. .git/hooks/pre-commit).\nBlocking … or not!\nA hook can be blocking. That means it can stop the command he’s linked to.\nBy convention, every hook run before its associated command blocks (hooks named pre-[command]). Still, two hooks can be bypassed by using the --no-verify CLI option: pre-commit and commit-msg.\nAlways-blocking/non-bypassable hooks are:\n\non the client/developer side : prepare-commit-msg, pre-rebase, pre-apply-patch, pre-push, pre-auto-gc;\non the server side: pre-receive, update.\n\nGit knows whether to stop or continue by looking at the script’s exit code. Standard exit codes are expected. These boil down to:\n\n0 (zero): everything’s fine, keep going;\n≥1: an error occured, abort the current Git operation.\n\nFor instance if we use a pre-commit script that ends with exit 1, then our commit won’t be created/completed.\nA real-world use case\nWe’ve got a script that stops each commit as long as the relevant files…\n\nretain conflict markers;\ncontain instances of: TODO or FIXME.\n\n\nClient-side hooks\nThese hooks are only available and triggered on the user side/machine.\nThey’re not shared within a project but we can achieve this in several ways:\nInitializing a project using a template\nWhen you’re initializing a Git repository, you can tell Git that you’d like to use a project template using the --template=&lt;template directory&gt; CLI option:\n\nwhen creating the project: git init --template=&lt;template directory&gt;;\nwhen cloning a project: git clone --template=&lt;template directory&gt;.\n\nThis lets you manage multiple project templates and load the one you like on clone or init.\nUsing an external hooks directory\nSince Git 2.9 we can set a global or local configuration setting to tell Git where our hooks live: core.hooksPath=&lt;hooks directory&gt;.\nWe can then manage a dedicated Git project for hooks that we’ll be able to share and enhance. This is useful for reducing errors and maintenance. We don’t have to copy/paste our hooks anymore from project to project, machine to machine!\nAvailable hooks\n\nAround commits:\n\npre-commit: before commit creation, even before message editing (e.g. linting, unit tests);\nprepare-commit-msg: before commit creation, when everything’s ready to start editing the message (e.g. pre-calculated message injection);\ncommit-msg: before commit creation, but after message editing (e.g. message content control and override);\npost-commit: when the commit is done (e.g. notification);\n\n\nAround patches (git am):\n\napplypatch-msg: before the patch (e.g. check patch message);\npre-applypatch: after the patch is applied, but before the commit is created (e.g. patch content validation);\npost-applypatch: when the path is applied and the commit is done (e.g. notify patch author);\n\n\nOther actions:\n\npre-rebase: before starting git rebase (e.g. stop rebase of master branch);\npost-checkout: after git checkout execution, for instance on rebase or at the end of a git clone (e.g. setting up a branch-associated configuration);\npost-merge: after a successful git merge (e.g. check if there are conflict markers left after a “bad” merge);\npost-rewrite: called by “rewriting” commands (git commit --amend, git rebase);\npre-auto-gc: on garbage collection (e.g. stop references clean-up if we have to use old ones);\npre-push: just before pushing revisions and objects to a remote repository (e.g. running unit tests and stop push if they they fail).\n\n\n\nServer-side\nWhen using SaaS like GitHub, GitLab or BitBucket, you can’t manually manage your server-side hooks. You’ll have to use their APIs or plugins.\nOtherwise, if you’re hosting your own remote repositories or have access to the server, you just have to put your scripts on the server as you’d like.\nAvailable server hooks\n\npre-receive: before receiving references/objects (e.g. check user rights on a project);\nupdate: before receiving references/objects on a branch (e.g. check user rights on a specific branch);\n",
		"description": "Improve quality and reduce stress with tasks automation.",
		"date": 1492128000,
		"image": "/assets/images/art-vid/art-git-hooks.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Addicted to hooks",
		"url": "https://delicious-insights.com/en/posts/git-hooks/",
		"locale": "en",
		"readingTime": "4 min"
	},	{
		"content": "360° JS is kind of our best-seller, a brand unto itself. It is the single unique course we schedule for multi-client sessions every month! And yet, the time has come to make the jump and rebrand it, dealing with the related risks. Why is that?\n\nA bit of history…\nHistorically, our “360°” training line expresses their ambition of a 360° vision of their ecosystem. In the early days we had single-day, more narrow courses: Powerful JS for the language itself, Guru JS for single-page apps (using Backbone and Underscore back then), Shielded JS on tests and industrialization… When we drafted the “Christmas list” course that aimed, over 4 days, to cover all that and more, it only seemed natural to name it “360° JS”.\nOur Git series saw a similar evolution, with “Daily Git” and “Advanced Git” merging and expanding into “360° Git”.\nNote that we regarded as wishful the idea that 4 days would cover enough of the Node.js ecosystem that we could dare name it “360° Node,” so it remained just “Node.js” this time around 😉\nIssues surface\nThe “360° JS” name wasn’t without problems, which made themselves apparent over the years.\n\nSome people were misled into thinking the training was about the language alone, and all of it, and turned out disappointed.\nAs a corollary, the name didn’t make it clear what the training truly was about and aimed to provide: all the necessary skills to build state-of-the-art front-end web apps.\nA training that actually is about 100% of the language is in the pipes, that will be named “360° ES,” so there would be a massive name confusion there.\n\nWe looked long and hard for a better name, fully aware that at this stage lots of people knew the 360° JS “brand” and were looking it up to find us, meaning we’d have to manage the transition carefully.\nThe new name\nWe settled on something that isn’t quite love-at-first-sight; the thing is, the requirements for that name are quite heavy: reasonably short, representing the scope well, not misleading, low risk of conflict with other trainings (current or planned)…\nIn the end, Modern Web Apps won. Had the content been more focused on Progressive Web Apps, we likely would have used that, but we felt that not covering stuff like App Shells and Page Shells, to name only these, was a no-go for such an opportunistic road.\nSo there it is: no more 360° JS, it is now Modern Web Apps. Long live that new name!\n",
		"description": "Our “360° JS” training rebrands as “Modern Web Apps”: why?",
		"date": 1506124800,
		"image": "/assets/images/art-vid/art-jst-wam.jpg",
    "_tags": ["training","announcement","js"],
		"title": "360° JS becomes Modern Web Apps",
		"url": "https://delicious-insights.com/en/posts/360js-becomes-mwa/",
		"locale": "en",
		"readingTime": "2 min"
	},	{
		"content": "You pined and whined and begged for it, here they are, finally: our two new training courses, 360° ES and Webpack ! Here’s a quick tour…\n\n360° ES: for aspiring JS gurus\nOver the past 2 years, our JS trainings opened with a first day dedicated to bringing everyone up to speed on the best parts of ES2015 (“ES6”), but obviously there are limits to what can be covered in just a few hours.\nTime and again, someone came up to us wishing for a course that would cover, in great detail, 100% of the language, naturally updated for the latest generation: ES2017, ES2018, etc. Yes, complete with the hidden subtleties, obscure features and super-advanced items, there was a definite need out there!\nAs for us, considering how much we love JS, especially this type of hidden gems we hardly ever touch upon during training classes, this kind of plea could not remain unaddressed forever…\nSo here comes 360° ES.\nYou’ll know it all, and you’ll know it now\nWe dredged up all the docs and reference specs on the language, in its latest incarnation (ES2017), and drafted a curriculum that could cover all this, in its deepest recesses. Generators? Obviously. Proxies? You betcha. Well-known symbols? You said it. Tagged template strings? Definitely.\nNothing is left aside, it could double as a pedantic quiz: is undefined a valid Map key? What parts of its API are not available on the weak variant? What are the pros and cons of a return await?\nBack to the future\nAnd we didn’t stop there. At Delicious Insights, we track ECMAScript closely (ECMAScript is the standard commonly referred to as “JavaScript”; that’s why this course is 360° ES), we even contribute to it at our small level. Tons of great new features are in the pipes, often already usable, or at least testable, thanks to JS runtime vendor initiative and Babel transforms.\nSo you’ll also find everything that we know is going to make the next version, plus a few nuggets that will only become 100% offficial later on! Private fields, new RegExp capabilities, Temporal, native observables… That’s a lot of cool new toys!\nAll the details on 360° ES, that JS training that rocks\nWebpack: superpowers\nThere was Grunt, Gulp, Brunch, Broccoli, Browserify, Rollup… Eventually, in the world of bundlers (these tools that take a ton of various development assets and spew out a few well-crafted production files), Webpack now stands as the leading player. Faster, better at optimizing, more customizable, nicer for development… its benefits are legion.\nIsn’t it scary tho?\nStill, Webpack suffered, in the beginning, of docs that were rich but poorly organized and sometimes missing, and its wealth of features was quite intimidating to many, making them feel overwhelmed by obsolete examples and contradictory third-party tutorials.\nTo infinity, and beyond!\nNevertheless, Webpack remains peerless for efficient bundling of front-end web apps, producing finely tuned, tailor-made bundles for each app’s specific set of needs. When doing it right, you can easily keep your configurations from getting unwieldy, even for super complex use cases.\nThis training course starts at zero and provides all the keys to build, step by step, Webpack configurations that remain maintainable, composable and reusable as they grow, from bare-bones use cases to extremely involved situations.\nIt’s just that “360° Webpack” didn’t quite sound right\nBesides, our training course doesn’t stop at optimizing your web app’s boot time, it also focuses on developer experience, by speeding up everything we can: development feedback lookup, incremental build speed, final build time, etc.\nYou’ll even learn how to write your own Webpack loaders and plugins: that’s how comprehensive the curriculum is!\nIn short, a skill set and know-how that is now a must-have for anyone writing rich JS code targeting browsers.\nAll the details on Webpack, the reference training course\nWe’re looking forward to training you!\n",
		"description": "Check out our 2 new courses:: 100% of latest-gen JavaScript and a deep-dive into Webpack…",
		"date": 1506297600,
		"image": "/assets/images/art-vid/art-est-wp.jpg",
    "_tags": ["training","announcement","js","outil"],
		"title": "360° ES and Webpack: 2 new training courses!",
		"url": "https://delicious-insights.com/en/posts/360-es-and-webpack/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": "Sessions in the second quarter of 2018 are here, especially *2 new Webpack\nsessions, seeing how its inaugural February session sold out in a matter of\ndays!\n\nNew Q2 sessions\nApril\n\n04–06 Apr 2018: Webpack\n10–13 Apr 2018: Modern Web Apps\n\nMay\n\n02–04 May 2018: 360° ES\n16–18 May 2018: 360° Git\n22–25 May 2018: Modern Web Apps\n28–31 May 2018: Node.js\n\nJune\n\n06–08 June 2018: Webpack\n19–22 June 2018: Modern Web Apps\n27–29 June 2018: 360° Git\n\nReminder: Q1 dates\nJanuary\n\n16–19 January 2018: Modern Web Apps\n23–26 January 2018: Node.js\n31 January–02 February 2018: 360° Git\n\nFebruary\n\n07–09 February 2018: 360° ES\n21–23 February 2018: Webpack\n28 February–02 March 2018: 360° Git\n\nMarch\n\n13–16 March 2018: Modern Web Apps\n27–30 March 2018: Node.js\n\nWe look forward to training you!\n",
		"description": "A whole slew of new training dates for Q2 2018: Webpack, 360° ES, Git, Node.js, Modern Web Apps…",
		"date": 1512345600,
		"image": "/assets/images/art-vid/art-est-wp.jpg",
    "_tags": ["training","announcement"],
		"title": "The Q2 2018 sessions are here!",
		"url": "https://delicious-insights.com/en/posts/new-q2-2018-sessions/",
		"locale": "en",
		"readingTime": "1 min"
	},	{
		"content": "When I have to share my work on projects, I want to feel confortable and ensure that what I share is clear and optimal.\nA few years ago, when I looked at my VCS history I found it to be sometimes hard to read and analyze. I moved on and tried to make better commits, avoiding the “What the commit?” effect with generic or weird unusable messages like fix stuff.\nLet’s sum this up. When speaking of good commits, what I mean is:\n\nI want my content and code to be optimal;\nthe resulting history has to be precise and meaningful.\n\nBecause I am lazy (like most developers I met in my career, which is not a bad thing 😅), I don’t want to think about this every time I create a commit. I want it to be automated.\nHere comes our savior: Git and its hooks.\n\npre-commit: check and sometimes rewrite parts of my (non-optimal) code/content;\ncommit-msg: check my commit messages.\npre-push: last checks before sharing (pushing to the remote).\n\n\nSetup and share\nSadly Git has no efficient process to share hooks inside a project (despite Git 2.9 and its git config core.hooksPath…).\nWhen scouring the web for better solutions you can find some alternatives. My preferred one is husky (version 7). Because I work mostly on web projects or Node.js scripts I use npm, and husky is an npm module we can install as a dev dependency and share through our package.json file inside our project.\nHow does it work? Husky is a Git hooks wrapper. It means that when installing your project with its dependencies (through npm install), it will “hook Git hooks”, putting its scripts in a .husky/ directory that it points the core.hooksPath local Git setting to.\nAs a bootstrap, we kindly put a small project boilerplate on GitHub for you 😘: deliciousinsights/dev-automation.\nShould you favor setting it up by hand, here you go:\nIn your terminal:\n\nStill in the terminal, depending on the tooling you wish to use, tell husky what scripts to run on which Git actions:\n\nA significant benefit of having your husky scripts right in your project is that they can now call other project-local scripts. Say I have a git-hooks subdirectory; I could then call my script from my husky configuration. Here’s an example for pre-commit:\n\nNote: you don’t have to work with JavaScript to use npm and husky; it’s just a convenient way of making it work everywhere 😁.\nCheck the code before committing\nI’m trying to be a super-hero developer but I still make mistakes. I leave things in that shouldn’t appear in my code. I also forget the conventions I should be using in my projects (for instance: how to format my code).\nWhen asking other developers it appears that I am not the only one facing these problems. Because we are human we are prone to fatigue, distraction, laziness… 😅\nTherefore we’d better find some tools to guide us and fix our mistakes.\nMy second brain (aka Christophe, my boss) already found a wonderful tool for code auto-formatting that works with many languages (JS, CSS, HTML, SCSS, Markdown, JSX…): Prettier. That tool is already configured to work with our VSCode editor. But if VSCode fails to run prettier or if we want to edit some file with another editor, we’d like Prettier to run nonetheless.\nTherefore, Prettier must run on the updated or created code that gets committed, whoever is contributing it to our project. Many npm modules are available for that but only one can process only our staged work (the one that is going to be committed): lint-staged (I used to got with precise-commits, but it’s not maintained anymore).\nIn your terminal:\n\nI still have to check for undesirable content. Once again I looked on the mighty Internet for a suitable tool but nothing matched my needs as I wanted a customizable tool. I ended up building my own 🤘: git-precommit-checks.\nThe goal is to setup some rules to be run on what’s being committed. A rule can match a file pattern (otherwise all the updated/created files are targeted). Then a regex runs on each file content; if a match is found it will print a message on the terminal as an error or a warning. An error is a blocking rule and will therefore stop the commit.\nFor instance I don’t want to leave some console.log in my JS files and I want to prevent failed merge to pass through (no conflict markers left in the code). I also want to be warned when I forget FIXME or TODO keywords, but without stopping my commit.\nIn the terminal:\n\nYou then need to set up git-precommit-checks.config.js which contains your settings and rules. Here’s an example:\n\nHere is an example of what it could look like in your terminal:\n\nEnsure commit messages are well-written\nThis is only possible when you’re using a commit message convention.\nAt Delicious Insights we’re using conventional commits (inspired by the conventional changelog).\nWe only add a small extra: using a text ellipsis … at the end of our messages when there is more then one line for the description (apart from issue reference), that is, when the description has a “body.”\nOnce again, we found a useful m",
		"description": "Enhance your code and your commit messages with Git hooks.",
		"date": 1546473600,
		"image": "/assets/images/art-vid/art-git-hooks-commit.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Enhance your commits with Git hooks!",
		"url": "https://delicious-insights.com/en/posts/git-hooks-commit/",
		"locale": "en",
		"readingTime": "7 min"
	},	{
		"content": "We’ve been meaning to launch our screencasting activity for a long time, in parallel of our in-room trainings. It’s now live, with the very first screencast of what should become a large catalog: Writing Modern Async JavaScript.\n3 hours of video and 40+ code samples: from raw callbacks to debugging to promises to async/await, we cover it all (and in depth).\nIn this article, we’d like to explain how this all came to pass, and give you a sneak peek at what’s next…\n\nWe’ve wanted to do this for a long time\nHistorically, Delicious Insights has always done in-room training. We absolutely want to keep this, as this is the only format that allows such a rich interaction with our trainees, not only so we can teach complex knowledge at high density, but also because this lets us gather priceless, diverse experience feedbacks and hindsights that help us grow our own understanding of the market, our vision of possible requirements, and overall give a richer context for our explanations.\nPlus, it feels so much nicer to meet in-person 😀. We intentionally cap our sessions at 10 trainees, so the human feel and teaching quality can be top-notch. We’ll always do in-room at Delicious Insights, this is part of our DNA.\nThat being said, this strictly in-room format came with its own limitations, in terms of content and market, and its own sources of frustration: having in-room as our sole revenue channel was detrimental to pro-bono work we wanted to spend more time on, such as open-source contributions, articles, conference talks, etc.\nNaturally, tech articles and conference talks help us reach a wider audience, and broach a larger selection of topics. And indeed, we want to free more time for such contributions and for open-source work. But none of this would compensate for the revenue loss that holding fewer sessions would imply.\nIt was thus mandatory that we diversify our revenue channels, an excellent first take being the production of paid digital contents. Among these, screencasts were the clear favorite, as they come with numerous benefits:\n\nWe already have a solid expertise on producing video contents (even if there’s still a lot we can improve on).\nThis is a very flexible format that lends itself both to long (10+ hours) and laser-focused (≤ 1 hour) courses.\nThe production cost doesn’t need massive sales to reach break-even.\nIt’s easy to produce both English and French, allowing us to tap at minimal cost into the worldwide IT market, which is vastly larger to the French-speaking market (and is also less reluctant to pay).\nThis ends up being 100% passive income: once the course is produced and launched, residual cost is negligible (even including support, updates and drip marketing).\nThis makes it easier to collaborate with third-party experts on specific topics, as it requires less physical availability and commitment from them than traditional in-room trainings in our catalog.\n\nTons of topics\nIndeed, topics abound! We enjoy a solid expertise on lots of stuff, such as accessibility, CI/CD, Git, GitHub, GraphQL, JavaScript, Node.js, Web performance, React, Redux, Ruby and Rails, security, advanced terminal / CLI usage, automated tests, VS Code, Webpack…\nThe hard part seems to be, in the end, choosing what topics to tackle first!\nPicking our first course\nFor numerous reasons, our first course had to meet a number of criteria:\n\nIt should not require extra research\nIt should be produced in 3 weeks tops\nIt should be intersectional / universal, in order to have as wide an audience as possible\nIt should bring solutions to actual pain points for the target audience (e.g. widespread misunderstanding / lack of mastery)\nIt should fit in an “medium-size” format (≤ 3hr) in order to allow moderate pricing (≤ €30)\n\nThis resulted in a quick winner that became Writing Modern Async JavaScript.\n\nThis course covers, in great detail across 3 hours of video with 40+ code samples, the following areas:\n\nRaw callbacks and “Node-style” callbacks\nDebugging async code\nPromises\nasync/await\n\nIn order to meet all our criteria, we did choose to exclude three related areas: the async.js library (that is mentioned a few times though), generators (even though they’re not asynchronous per se, they are related) and observables (RxJS style). We’ll probably cover these someday… in other courses 😉.\nIn English and in French\nThe French version shipped first, on Friday, June 14, 2019. Our English screencasts site and the English version for the first course shipped on July 2nd.\nIf you work with international teams and some of your colleagues are more at ease in French, be sure to point them to the French site! (Or the French version of this article.)\nAll our classes will ship in both languages, usually in French first, with English following within 2 to 3 weeks. Pre-sale pages go up simultaneously in both languages though.\nThe platform\nWe pondered distribution channels quite heavily… At a minimum, we wanted to distribute directly, on our own terms, so we didn",
		"description": "Our first screencast is out! 4hr and 40+ code samples for just €29, the go-to top-notch course.",
		"date": 1560470400,
		"image": "/assets/images/art-vid/art-screencast-async.jpg",
    "_tags": ["announcement","video course","paid","js"],
		"title": "Writing Modern Async JS: our new screencast",
		"url": "https://delicious-insights.com/en/posts/async-js-screencast/",
		"locale": "en",
		"readingTime": "5 min"
	},	{
		"content": "Aaaah, this in JavaScript. It’s not that it is actually hairy, it’s more that hardly anybody bothers to actually learn the core concepts behind that thing; as a result, everybody’s cargo-culting their incorrect mental models from their past languages.\nPeople mostly complain they’re “losing their this”. An intringuing corollary is that since our functions are not intrinsically bound to a static this value, we should be able to call any function with an explicit this of our own choosing. And indeed, in JavaScript, this is part of the calling contract for a function, just like its arguments. Which opens a wide array of cool opportunities.\n\nThis is not what it looks like\nWith one single exception, JavaScript does not automatically define this when calling a function. Actually, in strict mode and barring explicit overrides, except for that one situation I just mentioned, this will always be undefined. So there!\nIt doesn’t matter where the function “comes from,” how and where it was declared, etc. In JavaScript, this is defined at call time, not at declaration time.\nSo what is that notable exception? It occurs when a “traditional” function (one declared using the function keyword or the shorthand method notation) is called in a pattern I like to refer to as “Subject, Verb, Complement:”\n\nSubject: an object is used to start the expression\nVerb: we index a property on that object, and the property’s value is our function; it doesn’t matter whether we use direct indexing (the . operator) or indirect indexing (the [] operator).\nComplement: we immediately call the obtained function, on-the-fly within the expression term (using the () operator, surrounding any arguments).\n\nConsider the following code:\n\nNow let’s say I run this:\n\nEverything’s dandy. If we take that expression apart, we do find:\n\nA subject: wife\nA verb: greet\nA “complement:” the on-the-fly call using ()\n\nIn that case and that case only, JavaScript will define (among other things) this in the context of that call (technically, it adds 4 extra entries to the call’s Function Environment Record), using the subject as reference. In our particular code, this will refer to wife, so that when constructing the text, this.name will evaluate to wife.name, hence 'Élodie'.\n“Losing” your this\n100% of other cases boil down to referencing the function without immediately calling it, at least not within the same expression term. Possibilities are endless, such as:\n\nThe most annoying part is when we are in a callback function inside a code context where this was fine: the callback is, by definition, passed without being called on-the-fly; the mechanism that receives it is the one calling it at the appropriate time. And then kaboom!\n\nDefining this: part of the calling contract\nWhen you think about it, in JavaScript, this is part of the invocation context for the function, its “calling contract,” so to speak, just like its arguments (or the infamous arguments) or super.\nDoes that mean we can call a function and explicitly control its this? You betcha!\nConsider the full signature for Array#forEach for instance:\n\nIt seems we can tell forEach what this to use when the time comes and it invokes our callback function. Wonderful!\n\nBut how does forEach achieve that? It only has some identifier that references our callback, without any extra context information. How does it manage to call it with a specific this besides its arguments?\nDefining this when you know the arguments\nIt could use one of the methods available on all functions: call.\nSo yeah, I did just say that functions have methods. In JavaScript, functions are objects too. They’re instances of Function, to be precise. So they feature properties, some of which are data (notably name and length) and some of which are functions (specifically call, apply and bind). Breathe, it’s all right, it’s fine. You’ll get used to it.\nInstead of just writing:\n\n…it could instead go with:\n\nThe call method on a function allows us to call it by first specifying the this it should use, then the regular arguments, if any. Note these are passed individually, so you must know their details and number (what we call the function’s arity) ahead of time.\nBased on that, a naive implementation of forEach could look like the following code. In order not to muddle things further in your mind, we won’t write this as a method intended to be called on arrays (this would be the array, which could add to the confusion), we’ll just make it a regular function that accepts the processed array as its first argument.\n\nDefining this when you don’t know the arguments\nThis is cool, but what if we want to write generic code, that would force this regardless of the argument list? This is, by the way, something the bind method of functions does: it produces a wrapper function around the original one, then calls it when requested with all the passed arguments… and a specific this:\n\nHow does bind manage that?\nIt could use call’s sister function, named apply. It applies t",
		"description": "JavaScript does not intrinsically bind your functions to a specific “this”… but that means JS lets you call them with an explicitly given “this”!  What’s it useful for, and how to go about it?",
		"date": 1582588800,
		"image": "/assets/images/art-vid/art-call-apply.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Call a JavaScript function with an explicit this",
		"url": "https://delicious-insights.com/en/posts/call-and-apply-in-javascript/",
		"locale": "en",
		"readingTime": "6 min"
	},	{
		"content": "Starting May 4th, we’ll launch a series called “19 nuggets of vanilla JS,” with a daily article (not too long, great for nibbling) on a facet of pure JavaScript language; or a protip, best practice, poorly known ability, mythbusting, demystifying, etc. 19 reasons to come back!\n\nWhat’s in there?\nWoah, lots of stuff. Light stuff, heavy stuff, thought-provoking stuff…\n(Edit end of series: the entire list is now available here for easier consumption.)\n\nEfficiently deduplicating an array\nEfficiently extracting a substring\nProperly formatting a number\nArray#splice\nStrings and Unicode\nShort-circuiting nested loops\nInverting two values with destructuring\nEasily stripping “blank values” from an array\nLong live numeric separators!\nProperly sorting texts\nExtracting emojis from a text\nProperly defining optional named parameters\nconst is the new var\nUsing named captures\nObject spread vs. Object.assign\nConverting an object to Map and vice-versa\nThe for-of loop: should there remain only one…\nSimulating an abstract class with new.target\nNegative array indices thanks to proxies\n\nSo keep a sharp eye out for daily releases!\n",
		"description": "Every day a JavaScript nugget, for 19 days!",
		"date": 1587945600,
		"image": "/assets/images/art-vid/js-nuggets.jpg",
    "_tags": ["js","tutoriel"],
		"title": "19 JavaScript nuggets!",
		"url": "https://delicious-insights.com/en/posts/js-nuggets/",
		"locale": "en",
		"readingTime": "1 min"
	},	{
		"content": "This post opens our “19 nuggets of vanilla JS” post series, with one daily article (not too long, nibble-size) on a facet of pure JavaScript language; or a protip, best practice, poorly known ability, mythbusting, demystifying, etc. 19 reasons to come back!\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nEfficiently deduplicating an array (this post)\nEfficiently extracting a substring\nProperly formatting a number\n…and beyond! (fear not, all 19 are scheduled already)…\n\n“Deduplicate?”\nDeduplicate: the action of stripping from a list all extraneous occurrences of values, to only retain unique values in the end.\nThe list doesn’t have to be sorted and values may have multiple types. The implementation can choose whether to preserve order of the original values (be “stable”) or not.\nThis is the kind of words non-IT folks seldom use in their daily life.\nOld-school\nCan’t escape traversing the whole list.\nIf we can’t rely on the list being sorted (and its values being Strings), it’s even worse: at every step, we’ll need to traverse the ongoing result to check for a prior encounter of the current value! A traversal in a traversal: the algorithm is quadratic, of complexity O(n²). Roughly speaking, if your array has a thousand items, you’ll do a million turns… A rather naive implementation (especially considering we appear to be using ES2015+) could look like this:\n\nFeeling jarred by for…of or Array#includes()? You’re more the ES3 (pre-2009) type, so here’s a version for you that is actually even slower:\n\nNow, if we can assume items is “sorted” to begin with (identical values are adjacent), we can save a ton of computation by avoiding the inner traversal of result. Our algorithm goes from quadratic to linear (O(n), so a thousand items yield a thousand turns):\n\nStill, not so neat. If all our values are Strings, we could optimize the existence check a bit, but barely, by using a dictionary object and filling up its keys, to check with a seen.hasOwnProperty(item) or a rougher seen[item], but the gain may not be noticeable (altering the “shape” of seen every time we add a newly-encountered key kills most internal lookup optimizations by the JS engine).\n“Yeah, but Lodash!”\nAbsolutely! Lodash has featured for a long time uniq and friends (uniqWith, uniqBy, sortedUniq, etc.). It wasn’t even first to the key, as Prototype.js did that all the way back in 2005.\nLike a boss\nSure, if we have advanced needs (custom comparator, computed keys, etc.) we’ll need Lodash or some other help. But for the common case: deduplicating raw values, the trick, since ES2015, is to use a Set.\nSet is one of two new collection types (with Map) that came out in 2015 in the language’s standard library. You’ll find a similar type in many ecosystems, representing a set in the mathematical sense. In particular, sets have two important characteristics:\n\nThere is no instrinsic order\nAll values are unique\n\nIn practice, undefined, null and NaN are treated here like any other value. Set compares values using the SameValueZero pseudo-algorithm laid out in the JavaScript specification. It’s much like strict equality (the === operator, that first checks types are the same, then compares values), with a teeny-tiny difference: it considers -0, 0 and +0 as identical. I am quite certain you don’t mind 😉\nIt so happens that a Set can be built based on any iterable. There are quite a few kinds of iterables (making our code even more generic and useful!) but Array is undoubtedly the most well-known kind.\nSo we could go:\n\nBuilding a Set by passing it an iterable is sort of the optimized version of manually traversing that iterable (say, with for…of) and calling add(…) every time. The semantics are the same, it’s just faster. As a nice bonus, the order of insertion is preserved, making our algorithm “stable” that way.\nThe definition of add(…) mandates that it ignores any argument already present in the Set. Internally, it is implemented using data structures optimized for storing and verifying the existence of values, regardless of any ordering. This is why set.has(x) usually runs with complexity O(log(n)), which is potentially much more performant than arr.includes(x), which runs in O(n). What would you prefer, 3–4 turns or 1,000 turns?\nOK, but now that we have a Set, not a nice Array chock-full of cool handy methods… How do we land back on an array?\nA Set is an iterable too, which means we can spread it using the ... operator, or go with the Array.from(…) method. So to get an Array back, we could write this:\n\nSpreading inside an array literal ([…]) produces, well… an actual Array. We consume the entire iterable, which means an O(n) run, for a total cost around O(2n log(n)), which is very, very much better than O(n²). Here’s a small comparative table, assuming 10ns for the SameValueZero comparison, which is quite reasonable:\n\n\n\nSize\nO(n²)\nO(2n log(n))\n\n\n\n\n100\n100µs\n4µs\n\n\n1,000\n10ms\n60µs\n\n\n10,000\n1s\n800µs\n\n\n100,000\n1m40s\n10ms\n\n\n1,000,000\n2h46m40s\n120ms\n\n\n\n",
		"description": "Discover the best way to deduplicate an array since ES2015…",
		"date": 1588550400,
		"image": "/assets/images/art-vid/js-nugget-1.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Efficiently deduplicating an array",
		"url": "https://delicious-insights.com/en/posts/js-array-deduplication/",
		"locale": "en",
		"readingTime": "4 min"
	},	{
		"content": "Here is the second article of our daily series: “19 nuggets of vanilla JS.” This time we’ll talk about extracting a part of a string, and see there are no less than 3 ways to go about it… but only one should stick with you 😉\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nEfficiently deduplicating an array\nEfficiently extracting a substring (this post)\nProperly formatting a number\nArray#splice\n…and beyond! (fear not, all 19 are scheduled already)…\n\nThe Ugly: substr(…)\nDid you now? Strings have a substr method. You didn’t know? Good for you! It can’t be trusted and is not even handy.\n\nIt’s not quite official. It is in annex B of the spec, which despite being “normative” since ES2015 instead of “informative” earlier, is about the parts of the language and its standard library that were never quite clean and have been actively discouraged, sometimes for a long time (as for substr, it was frowned upon ever since ES3, that’s 1999, folks).\nIt has an unusual signature: substr(index, length). Yes, length. Not two indices, but one index and one length.\nIt has incompatible implementations. In particular, although it explicitly allows negative indices to start from the end (which is good!), this facet doesn’t work in JScript, the JS engine in Internet Explorer pre-9.0.\n\n\nIt also sports a lousy name, truncated haphazardly, which reminds me of the dark early days of PHP (nl2br, yes, I’m looking at you—and many others).\nSo throw this method to the trash.\nThe Bad: substring(…)\nMany fine folks use substring. Many folks indeed. Way too many folks. It’s kinda like this !@# parseInt: everybody thinks that yeah, okay, I got this. Then right when you do your most critical deployment ever, bam! The hidden bug. The caveat. The pitfall.\nThe name is clear though, I’ll give it that. And arguments are indices, which is cool.\nBUT—!\n\nIndices can’t be negative (no end-of-string confort there)\nThere’s a Nasty Joke™ if the second argument is less than the first.\n\n\nGuessed it? Yup, if the second index is less than the first, they get inverted! What could go wrong?! Sure, it has to be exactly what we intended, just like new Date(2020, 0, -6) lands on Christmas 2019, that makes perfect sense!\nThank you, next!\nThe Good: slice(…)\nHere’s our good friend at last! You probably know slice from arrays, well it’s also available on strings, and the API is exactly the same, which is nifty: there are more than enough APIs to remember, so when we can reuse one… Many good things to say, then:\n\n100% API-compatible with the slice from Array\nTwo indices, both allowing negative values (and as usual, the second one is exclusive)\nNo weird-ass inversion if the second one is less than the first one\n\nGotta love it! 😍\nThere are two more niceties, that it does share with the two prior candidates so they’re not exactly benefits, but I’ll list them anyway:\n\nOmit the second index: go to the end of the string\nOmit even the first index: grab the whole string\n\n\n\n“Yeah but that doesn’t do kawaii!”\nAs you no doubt have gathered, slice is my friend. Still, like all traditional String APIs, it often stumbles on Unicode. We’ll circle back to this soon (spoiler alert) but JS strings are, much like Java’s (argh!) encoded as UCS-2 / UTF-16LE, and what the API incorrectly refers to as characters (charAt, charCodeAt, etc.) are actually 16-bit (2-byte) code units. This is plenty for latin characters, digits and the usual Western punctuation, but the moment we reach a certain range of Unicode codepoints, say Chinese ideograms, Japanese kanjis or straight-up emojis, things start falling apart and we need a surrogate pair:\n\nYup, '😍' actually holds two code units. Normalized as ASCII source, we’d need to write '\\ud83d\\ude0d'. Lovely, right? One emoji, but a string of “length” 2. One codepoint, two code units making up a surrogate pair. Hence:\n\nSo how can we extract a segment “in a codepoint sense?” If we really need to, we can seize the fact that since ES2015, strings are iterable by codepoints, not by code units. Turn them into an array of codepoints, slice that array and rebuild the string from it:\n\nPfew! That still won’t handle codepoint combinations based on ZWJs (Zero-Width Joiners), so we’re not always in the clear…\n\n…but still, with a bit of luck we can reunite the whole family:\n\nI love a happy ending.\n",
		"description": "Splitting strings in JS? Thow away substring and forget substr! Your salvation lies with slice!",
		"date": 1588636800,
		"image": "/assets/images/art-vid/js-nugget-2.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Efficiently extracting a substring",
		"url": "https://delicious-insights.com/en/posts/js-string-slice/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": "Welcome to the third article of our daily series: “19 nuggets of vanilla JS.” This time we’ll talk about formating numbers, and see that we have amazing native capabilities!\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nEfficiently deduplicating an array\nEfficiently extracting a substring\nProperly formatting a number (this post)\nArray#splice\nStrings and Unicode\n…and beyond! (fear not, all 19 are scheduled already)…\n\nWhat do you mean, “formating”?\nThis can mean any number of things. I suggest we consider three major types of needs here:\n\nA fixed number of fractional digits (technical display)\nA change of numeric basis, or radix (same)\nA human representation anchored in a linguistic context (locale). It could be a plain number, a currency value, a percentage, disk usage… there’s really no shortage of use cases.\n\nFor the first two use cases, we’ve had solutions forever, but considering nobody reads the docs and many people have a hard time understanding that in JavaScript, even primitive number values can act as objects (“autoboxing”), these solutions tend to go unnoticed.\nThe hidden old-timers\nThe Number type comes with a number (ah ah) of instance methods, two of which are super useful here and often needlessly re-implemented.\nCareful! Unlike many more permissive grammars such as Ruby’s, JavaScript’s does not allow direct indexing through the dot (.) operator on an integer literal: any dot after such as literal will be regarded as the decimal separator that follows the integer part:\n\nIn practice this is hardly an issue: when we have the literal value, we might as well have its literal formated representation instead of computing it! In general, the number is referenced through an identifier, mooting the point:\n\ntoFixed()\nWe often need to format a numeric display using fixed fractional, for instance for alignment purposes. As many people have no idea this can be done natively, we often stumble on shamble implementations such as this one:\n\nThere’s been a native solution ever since ES3 (1999)!\n\nBy the way, do not mistake this for toPrecision(…), that is about the total number of significant digits (in the integer plus fractional parts).\nCool beans, but this still results in a technical literal, with no locale-aware formating (thousands grouping, decimal separator)… Now, if this is enough, then cool, but sometimes you’ll need more (we’ll get to that shortly).\ntoString(radix)\nAnother common need in a technical context is to choose a display radix. You know: octal, hexadecimal, binary… This is likely geared towards a technical format instead of a display to end-users, but who knows.\nHere again, we find a ton of hand-rolled solutions online, despite a native solution for radix 2 to 36 (yes, 36) being available since JS 1.1 (1.1, dang! In 1996, my friend! Were you even born?!). Like all other objects, Number instances feature a toString() instance method; but unlike most objects, it accepts an argument: an optional radix, which defaults to 10.\n\nTadaaaa!\n“I have the power!” — Intl\n(Bonus points if you get that reference.)\nWhen we need “cleaner” formating, geared toward Real People™ with a linguistic context (which is part of display localization, or L10n), we long had to break out the big guns with Moment.js or other modules with a big fat localization corpus (around 1MB, quite the bundle!).\nNowadays we can lighten things up with solutions such as Format.JS, which is nice but is really just sugar-coating on top of a native API JavaScript engines have provided for quite a while now: the Intl namespace.\nECMA-402\nThis part of JavaScript’s “standard library” has its own standard: ECMA-402, which is driven by the same technical committee (TC39) as ECMA-262, the standard for JavaScript itself.\nThe idea is to let our JS code access the enormous corpus of formating rules, which can get pretty intricate from one language to the next, related to numbers and dates. There are a huge number of cases, variations, fine print… It is all known as the CLDR (Common Locale Data Repository), which is part of the ICU (International Components for Unicode), and can usually be found among the libraries of our OS, maintained multiple times a year.\nIn our situation, we’re mostly interested by the Intl.NumberFormat class, that lets us create extremely detailed and versatile numeric formaters.\nnew Intl.NumberFormat(…).format(n) vs. n.toLocaleString(…)\nWith ES5.1 (2010), the legacy toLocaleString() instance method on Number got expanded. It used to not accept any argument and just return the default format for the active locale; it then started accepting all the options of new Intl.NumberFormat(…).\nIf we just need a one-shot format, this constitutes a neat shortcut:\n\nBut… if we reuse the same format over and over (e.g. within a long loop, or in response to a frequent event such as mousemove), we’ll be better off instanciating the formatter only once, and reusing it from then on:\n\nIn the remainder of this post, I’ll mostly use t",
		"description": "Fine-tuning number formating with JS? Easy as pie!",
		"date": 1588723200,
		"image": "/assets/images/art-vid/js-nugget-3.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Properly formating a number",
		"url": "https://delicious-insights.com/en/posts/js-number-formatting/",
		"locale": "en",
		"readingTime": "6 min"
	},	{
		"content": "Here comes the fourth article of our daily series: “19 nuggets of vanilla JS.” Today we’ll dive into an array method that’s been around almost forever (JS 1.2, 1997): Array#splice(…), a true Swiss-army knife of array tweaking.\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nEfficiently deduplicating an array\nEfficiently extracting a substring\nProperly formatting a number\nArray#splice (this post)\nStrings and Unicode\nShort-circuiting nested loops\n…and beyond! (fear not, all 19 are scheduled already)…\n\n\nsplice or slice?!\nDon’t mistake one for the other.\n\nslice([from[, until]]) is an idempotent / immutable operation: it doesn’t alter the original array but returns a new array based on the provided range.\nsplice(from[, count[, item…]]) is a mutable operation, altering the original array by “replacing” a given segment with a new one. Sort of like RNA splicing (indeed, “splicing” means that sort of thing when you talk about genes, celluloid films or electrical wiring, to name a few). In short, we’re cut-and-pasting. It is the CRISPR of JavaScript arrays.\n\nAlso note that although slice has an identical twin on the String type, splice doesn’t (because Strings are immutable).\nFor removing items\nThe “true” name of the second argument is deleteCount. That says something! If you just go with two arguments (or even just one), splice removes items, starting at position from (first argument), which allows negative indices (like slice).\n\nWith a second, positive argument, it removes that number of items (stopping at the end of the array if it happens to be too short).\nWithout a second argument, it removes the remainder of the array.\n\nIn all scenarios, splice returns the removed segment, always as an array, regardless of whether there were zero, one or more items in it.\n\nFor replacing items\nYou can also provide, as individual arguments, items to insert as a replacement of those removed. What’s interesting is that there can be as many as you like: their number doesn’t have to match the amount of removed items!\n\nWhat if the items you wish to replace with are available as an array? No worries, that’s what the spread operator is for:\n\nFor inserting items\nAstute reader, you probably realized that replacing items is a generic case:\n\nTo remove stuff, just replace by… nothing.\nTo insert stuff, just don’t remove (replace) anything first.\n\nIndeed, if count is zero but you do provide “replacers,” you end up—you guessed it—inserting.\n\nNot on typed arrays…\nYou’re fresh, hip and rockin’ typed arrays? These have a fixed size, so splice is one of the few methods from Array you won’t find there.\n…but works on array-likes\nAs do all Array methods, splice does not require it be called on a “true” array: the binding needs just be an “array-like,” which means it features a non-negative integer length property and numeric properties between 0 and length - 1.\nI don’t believe you’d ever need this, but it’s still quirky-fun:\n\nBonus round: copyWithin\nA fairly frequent tweaking use case is about copying part of an array elsewhere… in the same array. Internal copy-pasting, so to speak.\n\nWe could achieve that with splice, assuming an extra slice:\n\nBut that blows… Since ES2015, we’ve had copyWithin for that scenario. The signature is copyWithin(to[, from[, until]]). Careful, to needs to be within the array’s boundaries (less than length), otherwise nothing happens.\n\nIf that’s your need, then it can’t be beaten performance-wise, so there’s your freebie.\nWant to dive deeper?\nOur trainings are amazeballs, be they in-room or remote online, multi-client or in-house just for your company!\n",
		"description": "Do you *really* know Array#splice, the Swiss-army knife of JavaScript array tweaking?",
		"date": 1588809600,
		"image": "/assets/images/art-vid/js-nugget-4.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Array#splice, the Swiss-army knife",
		"url": "https://delicious-insights.com/en/posts/array-splice/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": "Time for the fifth article of our daily series: “19 nuggets of vanilla JS.” We’ll explore the rich and complex relationship between JavaScript strings and Unicode; because the time of ISO-Latin-15 / Windows-1252 is long gone, folks…\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nProperly formatting a number\nArray#splice\nStrings and Unicode (this post)\nShort-circuiting nested loops\nInverting two values with destructuring\n…and beyond! (fear not, all 19 are scheduled already)…\n\nA complex relationship…\nWe’re told that the String type has always “been Unicode.” After all we can indeed put any Unicode-defined glyph in there, it works. But in practice, the encoding they went with, along with the misleading terminology of the API, have some issues.\nIncidentally, almost everything we’ll discuss also applies to Java’s String, because both were designed at the same time in a quasi-identical way. (But JavaScript doesn’t look anything like Java, don’t go pretending I ever said such a thing!)\nUTF-16\nJavaScript strings are encoded using UTF-16 (or UCS-2, depending on the implementation; it’s a distinction without much of a difference). Every string position therefore refers to 16 bits of data, or 2 bytes. This is indeed enough to encode most Unicode codepoints in the U+0000 to U+FFFF range, but not beyond (despite there being a truckload beyond, in practice adding up to around 144,000 glyphs). This 16-bit block is called a code unit.\nFor instance, Emojis, along with many lesser-known or ancient alphabets (such as ugaritic or phenician) and graphical sets (Mahjongg tiles, dominos, cards…) lie beyond the 16-bit range, so they require using two combined values, each of which is invalid when standalone: the combo is called a surrogate pair. Sure, this pertains mostly to extended graphical glyphs and extinct languages (like, extinct), but still.\nCharacters, codepoints, code units and surrogate pairs\nFor most people, a “character” is a full-blown entity; a “cell” in the Unicode table, so to speak, that is actually called a codepoint.\nThis analogy only goes so far, as many codepoints do not represent a full character, but more discrete, technical elements of it that can be invisible (such as the hyphenation point or the zero-width joiner) or diacritical signs (e.g. the acute accent).\nStill, for practical purposes, a chinese ideogram, a georgian digit, a babylonian pictogram or an emoji are all things we perceive as “a character” when looking at a text.\nYet in practice, because of the UTF-16 / UCS-2 encoding, many codepoints require a surrogate pair, hence two code units, which through the API of JavaScript’s String means “two chars.” In fact, a length of 2.\nYou read that right! charAt returns a code unit, not a codepoint. Same for charCodeAt, based on code units, or length, that gives the number of code units. Basically, almost the whole API of String is based on code units. Check this out:\n\nDo notice the escape sequence \\uXXXX we’ve always been able to use in String literals (or in CSS): it allows 4 hexadecimal digits, covering only 16 bits, so a code unit, not a codepoint! How do you drop a non-literal Emoji then? You need to grab its codepoint, convert it to a surrogate pair and type it all: '\\ud83d\\ude05' === '😅'.\nQuite the clusterfuck, wouldn’t you say?\nWell, there are things that work fine, like toLocaleUpperCase(), localeCompare(…) and their friends, that do know about encoding (and are barely impacted), so there’s that…\nES2015: literal codepoints\nES2015 (formerly known as “ES6”) brought a lot to the table when it comes to Unicode.\nLet’s start with escape sequences: having to compute the surrogate pair when exceeding the first 16 bits was super annoying, although getting the codepoint was easy enough.\nWe thus get a new escape sequence for Unicode, using curly braces: \\u{…}. It accepts the whole codepoint, making it instantly more dev-friendly.\n\nI myself favor dropping the literal character unless it’s invisible (such as '\\u202f', the narrow no-break space used a group delimiter or unit separator in the French number formatting, or a true invisible character: the soft hyphen \\u00ad, that represents a hyphenation point that, when activated, results in a hyphen mark). For such use cases, an explicit escape sequence is easily identified when browsing the code.\nES2015: codepoint-based APIs\nWe also get three new String methods pertaining to Unicode.\ncodePointAt(index)\nIt’s just like charCodeAt(index), except that when the given position (expressed as code units) lands on the beginning of a surrogate pair (referred to as a lead surrogate or high surrogate), instead of just returning that, it’ll grab the rest of the codepoint from the following code unit (the trail surrogate or low surrogate).\n\nString.fromCodePoint(…)\nIt’s akin to String.fromCharCode(…), but accepts codepoints instead of code units.\n\nnormalize([form])\nWhen exploring Unicode, you usually stumble upon the concept of normalization, along with notio",
		"description": "Everything you should know to correctly process advanced Unicode codepoints in JavaScript strings.",
		"date": 1588896000,
		"image": "/assets/images/art-vid/js-nugget-5.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Strings and Unicode in JavaScript",
		"url": "https://delicious-insights.com/en/posts/js-strings-unicode/",
		"locale": "en",
		"readingTime": "5 min"
	},	{
		"content": "Welcome to the sixth post in our daily series “19 nuggets of vanilla JS.” This time around we’re looking harder at an old ability of the language, that you should only use after much deliberation tho: statement labels.\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nArray#splice\nStrings and Unicode\nShort-circuiting nested loops (this post)\nInverting two values with destructuring\nEasily stripping “blank values” from an array\n…and beyond! (fear not, all 19 are scheduled already)…\n\nIt’s been a while\nEver since JS 1.2 (1997), it’s been possible to label statements. This has been mostly used for loops: classical for, for…in, while and do…while (and since ES2015, for…of). The idea is to allow in-depth short-circuiting, usually through nested loops.\n(You’ll get further details from the always-amazing MDN docs, especially about labelled blocks when you have multiple successive blocks making return impractical but wrapping all the remaining scope in an if is undesirable.)\nShort-circuiting with break\nWhen we face nested loops, it is often desirable to be able to short-circuit multiple levels at once. Let’s say you’re looking for a value in a 2-dimension matrix; the moment you find it, you want to exit the inner loop (iterating the columns of the row) and the outer loop (iterating the rows of the matrix).\nWithout labels, this can be a bit kludgy:\n\nThe secret (which, like most secrets, can be discovered by reading the docs, dammit!) is that we can label loops, and use any “active” label as an operand of break. The text of the label is entirely up to you, it could even be an active identifier (but why would you hate readability that much?!). Common candidates are outer or top for the outermost loop. The previous code would become something like this:\n\nThat’s better already, isn’t it? One favorite example of mine, that you can find in the MDN, is about an array of predicates (truth tests) and a series of values, and we try to figure out whether all values pass all predicates. As you may have guessed, the moment one test fails, we want to drop the whole thing:\n\nI like it 😊\nShort-circuiting with continue\nYou probably realized we can also use this with continue, in order not to just skip to the next turn of the current loop, but to the next turn of a surrounding loop!\nAs a variation on the previous example, let’s say we want to get all the values that pass all the tests. The moment a test fails, there’s no point keeping on with the current run of the outer loop, we can skip right to the next one, and restart our inner loop (and any extra in-outer-loop code) from there.\n\nNeat. (And yes, we could have turned this code around and used tests.every(…), but that’s not the point.)\nThe trap of disguised labels\nEver since arrow functions showed up in ES2015, we’ve seen a rebirth of labels… by mistake!\nLet’s say that, for the sake of compatibility with a third-party API, we need to turn a list of numbers into a list of objects with that number as a value property. We might be tempted to write this:\n\nGotcha! We end up with an array of 9 undefined. Classy. This is because we’ve grown comfy with the shorthand notation of arrow functions just returning a value (which is good!) but forgot that curly braces have variable semantics.\n\nYour callback function evaluates n, doesn’t do squat with it and returns nothing (i.e. returns undefined).\nThis is a common trap when writing short arrow functions that need to return an object literal: you need to ensure that curly braces carry object literal semantics. Here, by default, they represent a function block.\nFor curly braces to mean an object literal, they need to appear in our code at a spot where JS grammar mandates an expression. The simplest way to trigger that grammatical context without altering code semantics is to surround the curlies with parentheses:\n\nIn recent code you often stumble upon this in selectors / mapStateToProps with Redux (or other application state management libraries, that have the same kind of needs).\nFavor functions and return\nTo wrap up, remember that this type of code is often hard to read and may leave an unpleasant aftertaste in your mouth… For most cases, you’ll be better off defining small helper functions for nested traversals, and resort to trusty ol’ return for short-circuiting. The labeled break example from above would be better written like so:\n\nLabelled nested loops may be preferred for raw performance reasons, and even then, only after having deeply profiled code to check that perf was indeed an issue and the refactored code brings significant gains. It’s pretty rare, nowadays. We’re not always coding a 3D engine that needs to guarantee 60FPS in Full HD, or working with nanosecond-obsessed devs like those of Lodash, you know…\n",
		"description": "Sometimes the best way to short-circuit nested loops is statement labels!",
		"date": 1588982400,
		"image": "/assets/images/art-vid/js-nugget-6.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Short-circuiting nested loops",
		"url": "https://delicious-insights.com/en/posts/js-labels/",
		"locale": "en",
		"readingTime": "4 min"
	},	{
		"content": "Here comes the seventh article of our daily series “19 nuggets of vanilla JS.” Are you still going through a temporary variable when inverting two values? You’re Doing It Wrong!™\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nStrings and Unicode\nShort-circuiting nested loops\nInverting two values with destructuring (this post)\nEasily stripping “blank values” from an array\nLong live numeric separators!\n…and beyond! (fear not, all 19 are scheduled already)…\n\nYe Olde Way\nNumerous algorithms require inverting two variables. You’ll need that in hash computations, sorts, math sequences, and more. Long story short, we’ve needed this ever since programming was a thing (which, if only in its modern form, has been roughly 70 years). But the first widespread programming languages had no specific mechanism for this, so we’ve needed to go through a temporary variable.\nLet’s say we need to invert the values of A and B. The three-step dance is always the same, and considering we’re only talking about assignments, the syntax will be identical in most widespread languages:\n\nWe “backup” the initial value of A in, say, TMP.\nWe copy the value of B in A.\nWe copy A’s former value (backed up in TMP), in B.\n\nThis boils down to some kickass, prize-worthy code:\n\nNot that exciting, I’m afraid.\nA quick reminder on destructuring\nThe term destructuring generally refers to obtaining multiple data at once from some piece of structured data.\nSince ES2015, JavaScript has offered two kinds of destructuring:\n\nPositional (“array”) destructuring is based, as its name implies, on the positions of values in the source data, which can be anything iterable, and most often is an array. That type of destructuring is surrounded by square brackets ([…]).\nNamed (“object”) destructuring relies on the names of properties in the source data, which can be any object. The order thus does not matter. This type of destructuring is surrounded by curly braces ({…}).\n\nYou can’t destructure null or undefined, but anything else is fair game.\nFinally, you can put a destructuring anywhere there is an assignment, be it…\n\nexplicit (assignment operator =, for instance in a declaration), or\nimplicit (parameters in a function signature, or within another destructuring, as you can indeed nest them).\n\nExamples of positional destructuring:\n\nExampled of named destructuring:\n\nWhich means that…\nAlright, let’s invert two variables a and b without breaking a sweat:\n\nVoilà.\nI’d like to say “it doesn’t get any shorter than this,” but some languages have advanced-enough parsers that they can avoid ambiguities and allow destructs without delimiters (e.g. Ruby would go: a, b = b, a 😍). But hey, no more temporary variable, and the intent is immediately clearer!\nBy the way, this is not at all limited to two values, you know! Let’s say you want to rotate values across a triplet. No worries:\n\nBig mood 🤗\nWant to dive deeper?\nOur trainings are amazeballs, be they in-room or remote online, multi-client or in-house just for your company!\n",
		"description": "If you’re still going through a temporary variable to invert two others, You’re Doing It Wrong™",
		"date": 1589068800,
		"image": "/assets/images/art-vid/js-nugget-7.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Inverting two values with destructuring",
		"url": "https://delicious-insights.com/en/posts/js-invert-by-destructuring/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": "Welcome to the eighth article of our daily series “19 nuggets of vanilla JS.”. Need to clean up an array? We’ve got a lot of solutions, and some are… super concise!\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nShort-circuiting nested loops\nInverting two values with destructuring\nEasily stripping “blank values” from an array\nLong live numeric separators!\nProperly sorting texts\n…and beyond! (fear not, all 19 are scheduled already)…\n\nA few reminders on filter\nSince ES5 (2009), the Array type has sported many iterative methods directly inspired by Prototype.js’ Enumerable module:\n\nforEach to invoke a callback on each element (you should now favor for…of);\nmap to produce a derived array via a transform function;\nevery and some to determine whether a predicate (a yes/no function) is passed by all or some items of the array;\nreduce and reduceRight to produce a single consolidated value by traversing all values in the array (e.g. a sum, a concatenation, a hash);\nFinally, filter to produce a derived array that only retains values that satisfied a predicate.\n\nWith the exception of reduce and reduceRight that accept 3, all other methods accept 2 arguments:\n\nThe callback function (for filter, that would be the predicate);\nThe context: if the callback function expects a specific this but passing it by reference would “lose it” (which means it was declared using function or the shorthand method syntax), this optional argument lets us specify the correct context (which is super useful and performant).\n\n(By the way, these callbacks get not one but three arguments: the value, its index, and the entire array. This can come in super handy…)\nHere are a few examples:\n\n“Blank values”?\n“Stripping blank values,” meaning what, exactly? What is a “blank” value? Well, this will largely depend on your actual needs.\nMost often, you’ll have an array of numbers (where zero is deemed invalid) or strings (where empty strings are deemed invalid). In both cases, false, null, NaN and undefined would be regarded as invalid too.\nThis is the super-easy case we’ll see later. But what if your situation is different? Perhaps you want zeroes? Or false? Or you want to remove whitespace-only strings?\nNo worries, I’ve got a series of cute handy predicates for you.\nA bunch of useful predicates\n\n\n\nValue(s) to test for\nPredicate\n\n\n\n\nundefined\nx === undefined or typeof x === 'undefined'\n\n\nnull or undefined\nx == null (note the loose equality)\n\n\nempty string ('')\nUse as boolean, or x === ''\n\n\nwhitespace-only string\nx.trim() as boolean, or x.trim() === ''\n\n\nText convertible to number (except NaN)\nNumber\n\n\n“Usable” number (neither NaN nor infinite)\nNumber.isFinite (ES2015)\n\n\nInteger\nNumber.isInteger (ES2015)\n\n\nReliable integer1\nNumber.isSafeInteger (ES2015)\n\n\n\n1 Every Number in JS relies on the IEEE 754 standard; these are floating-point 64-bit numbers with a maximum of 15 digits of precision. Beyond a certain limit, integers are rounded to their closest representation. Try evaluating 9999999999999999 in your console or Node REPL, and then even 1234567890123456789…\nIf you’re processing texts, but want to consider falsy values (e.g. null, undefined, false or NaN) as empty strings, you can go with String(x || '') (because String(undefined) would result in 'undefined', among other things).\nIf you regard whitespace-only strings as empty too: String(x || '').trim() is your friend. Examples:\n\nNeed to reject instead of retaining?\nSometimes you’ve got a predicate available, that might contain some fairly complex code, and tough luck: it tests the exact opposite of what you want. It says “yes” when you want to reject and “no” when you want to retain. Alas, you only have filter around, there is no reject method.\nPlease don’t go rewriting that predicate yourself trying to invert its logic (if the predicate is complex, this can quickly break down). The easiest way is to use a negator:\n\nIf you do this a lot, you can make the negation generic:\n\nAs you might expect, Lodash provides this: _.negate…\nThe super-easy case: Boolean!\nWe mentioned earlier the rather widespread case of an array of numbers (where zeroes are invalid) or strings (where empty strings are invalid). In both cases, false, null, NaN and undefined would also be regarded as invalid.\nWhen you want to filter out all falsy values, there’s a super-concise way:\n\nIt’s sort of like the data.compact you’d find in other languages (e.g. Ruby): it will strip out null and undefined, which is rather common base case, but also take out false, NaN and '', which is a rather nice bonus. OK, this will also remove 0, so be careful if that’s an issue for you. But I do use this all the time to clean up, for instance, an array where every result item must be a non-empty text or some random object.\n",
		"description": "Need to clean up an array? We’ve got a lot of solutions, and some are… super concise!",
		"date": 1589155200,
		"image": "/assets/images/art-vid/js-nugget-8.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Easily stripping “blank values” from an array",
		"url": "https://delicious-insights.com/en/posts/js-strip-blank-values/",
		"locale": "en",
		"readingTime": "4 min"
	},	{
		"content": "Time for the ninth article of our daily series “19 nuggets of vanilla JS.” Today we’re covering a syntax extension for numeric separators that finally brings to JavaScript a comfort item that many other languages have had: the visual splitting of digits to more easily see the components or scale of a number.\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nInverting two values with destructuring\nEasily stripping “blank values” from an array\nLong live numeric separators! (this post)\nProperly sorting texts\nExtracting emojis from a text\n…and beyond! (fear not, all 19 are scheduled already)…\n\nThe problem\nPresented without comment:\n\nThe solution\nMany languages have adopted the _ (underscore) numeric separator in their numeric literals. Do note that the separator has zero impact on the final value, it’s just there to improve the readability of the source code, with roughly the same common-sense constraints other languages provide:\n\nForbidden at the beginning or end of the literal (let n = _10_: that’s a big nope!)\nNot at the end of the integer part or beginning of the decimal or exponent part of a floating-point literal (let n = 5_._0e_10: that’s three strikes!)\nNo adjacent separators (const FEE = 15__00: nope!)\n\n\nThis also works on BigInt literals (a type introduced in ES2020), for which they’re especially useful.\n\nBeware…\nIn order Not To Break The Internet™, the Number, parseInt and parseFloat functions do not understand these separators, thereby avoiding a breaking change, which would be a show-stopper for JavaScript.\nRight now?!\nThis became stage 4 in July 2020 and is therefore part of ES2021. But besides Babel transpiling it (you bet, that’s simple), it’s been native since Firefox 70, Chrome 75, Edge 79, Safari 13, Opera 62 and Node 12.5. You also find it in TypeScript.\nSo I’d say yes, right now 😚\n",
		"description": "Tired of having to manually count the digits to get the scale of a number?  Annoyed by having to manually split to isolate its components?  Numeric separators are finally here!",
		"date": 1589241600,
		"image": "/assets/images/art-vid/js-nugget-9.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Long live numeric separators!",
		"url": "https://delicious-insights.com/en/posts/js-numeric-separators/",
		"locale": "en",
		"readingTime": "2 min"
	},	{
		"content": "We’re already at the tenth post of our daily series “19 nuggets of vanilla JS.” Today we look at a recurring theme: sorting arrays in a smart (and clean) manner, despite sometimes-complex formatting. And yet, we seldom need to query the server or use a library: the JS standard library provides some outstanding capabilities!\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nEasily stripping “blank values” from an array\nLong live numeric separators!\nProperly sorting texts (this post)\nExtracting emojis from a text\nProperly defining optional named parameters\n…and beyond! (fear not, all 19 are scheduled already)…\n\nArray#sort(…)\nIt’s a classic: the sort(…) method on Array. I’m sure you used it already, it’s easy:\n\nWe could do that in our sleep…\nWatch out for these mutants!\nThe first pitfall is that sort(…) is mutative: it modifies the array in-place, instead of generating a freshly derived one. True, this is not the only mutative method on Array: you’ll also find copyWithin, fill, pop, push, reverse, shift, splice and unshift. But it remains a minority (9 methods out of 32), and when you’re used to immutable behavior (original array untouched), this can bite:\n\nIt’s done that way for performance reasons: most of the time, you do want to sort the original array. If that trips you up, you can always clone it first:\n\nSorting numbers\nLook at this mess:\n\n\nHe he, that old trap. As JavaScript uses dynamic typing, classical arrays (the Array type, as opposed to numeric typed arrays) can end up holding values of multiple types, for instance String, Number, etc.\nAs a result, to sort it all, we need a way to represent values in an interoperable way. And the only type all other types can converge to is String (which is why all objects have a toString() method).\nThis is what sort() defaults to: it converts all values to String before comparing them, using a good old &lt; operator, of all things.\nNot cool.\nSo what’s a developer to do? We need to provide our own comparator, my friend! sort(…) accepts an optional argument that is a comparison function:\n\nIt will receive two arguments: two values from the array to compare\nIt returns a negative number if the first argument comes earlier, a positive one if the first argument comes later, and a neutral one (zero) if they are deemed equivalent.\n\nIf you have a Java background, it’s much like doing an anonymous implementation of the java.util.Comparator interface. But less annoyingverbose.\nFor ascending sorts, we can simply return the difference between the two numbers: if a is smaller, the result will be negative, so a will be sorted earlier. For a descending sort, we use b - a instead, which negates the sign!\n\nLovely.\nAdvanced text sorting\nIt’s not just numbers that can trip you up. By default, comparison between strings is lexicographical: it uses the character table’s ordering (so the Unicode codepoints). Except this is not a natural order at all, folks:\n\nWTF?! I highly doubt any of your users would be happy with that trashunexpected result. Linguistical ordering usually follows a set of particular rules about diacritics (e.g. acute accents, cedillas), case (upper / lower), punctuation, and more. None of that here.\nThis corpus of rules, that can vary drastically from one locale to the next, is usually referred to as collation. You might have seen that in SQL, when defining tables and columns or writing a query, so that your order by on textual fields yields something reasonable. It’s a fairly universal data processing concept.\nWell guess what? str1 &lt; str2 doesn’t give a hoot about collation.\nRespecting the locale\nLet’s go with something better.\nString#localeCompare(…)\nAt a minimum, we can use the basic version of localeCompare(…), a String method that’s been around since ES3 (1999). It compares two strings by following the rules of the active locale, and returns -1, 0 or 1. Even when that’s all you’ve got and you can’t use an explicit locale, it’s miles better than &lt;, especially since a majority of locales tend to converge about their sorting rules:\n\nThat is already awesome, but when our JS engine supports ECMA-402, the standard for the Intl API (that is an integral part of JavaScript’s standard library), this method becomes much more powerful as it becomes a shortcut for features made available by Intl.\nIn practice, we’ve had that since IE11, Firefox 60, Chrome 74, Edge 15, Safari 10, Opera 61 and Node 0.12! Quite enough…\nWe mentioned already in nugget #3 that it gave superpowers to Number#toLocaleString(…), by putting it on top of Intl.NumberFormat. This happens here too: String#localeCompare(…) becomes a wrapper around Intl.Collator.\nIntl.Collator\nThis object deals with, you guessed it, collation. Just as with Intl.NumberFormat, we provide a locale and any option we’d need. And what options these are! So many cool tricks. Let’s talk about my two favorite ones.\nFirst, numeric can be set to true to sort numerically the segments that are… numeric (cough) in our te",
		"description": "Sorting complex data arrays in JS is often Mission Impossible… yet native capabilities are amazing!",
		"date": 1589328000,
		"image": "/assets/images/art-vid/js-nugget-10.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Properly sorting texts",
		"url": "https://delicious-insights.com/en/posts/js-array-sorting/",
		"locale": "en",
		"readingTime": "5 min"
	},	{
		"content": "Here’s the eleventh post of our daily series “19 nuggets of vanilla JS.” And today we’re talking emojis. They’re everywhere, but it’s hard to identify, extract and collect them from a string. They are an ever-expanding list and, in JavaScript Strings, are always encoded as surrogate pairs because of their higher-range codepoints… Fortunately, an ES2018 feature makes it easier for us!\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nLong live numeric separators!\nProperly sorting texts\nExtracting emojis from a text (this post)\nProperly defining optional named parameters\nconst is the new var\n…and beyond! (fear not, all 19 are scheduled already)…\n\nEmojis, Unicode and surrogate pairs\nIn our #5 “nugget” post, “Strings and Unicode in JavaScript,” we discussed already how Unicode is handled by the String type. In particular, we saw that text was encoded as UTF-16, with 2-byte code units, which requires a combination of 2 individually-invalid code units for high-enough codepoints, something called a surrogate pair.\nThis is the common scenario for emojis, as pretty much all of them have codepoints in the U+1Fxxx range, plus numerous modifiers going all the way to the U+Exxxx range.\nSuch a diversity implies that it’s rather tedious and error-prone to “manually” identify emojis in a String. The “traditional” regex to achieve this would be rather intense (and would likely perform a bit poorly)…\nThe Unicode flag for regexes\nES2015 introduced a u flag on regexes, that triggers Unicode handling.\nBefore ES2018, this “only” allowed using the codepoint literal syntax (i.e. \\u{xxxxx}) in addition to the legacy code unit literal syntax (\\uXXXX). But since ES0218, this also lets us describe positive or negative matches with Unicode properties.\nUnicode properties\nThe Unicode standard assigns each codepoint a series of properties. These are cross-cutting categories, so to speak. As an example, consider the U+2778 glyph: ❸ (fondly known as Dingbat Negative Circled Digit Three). Some of its properties are:\n\nScript: Common / Zyyy(here’s a [list from the ES spec](https://tc39.es/ecma262/#table-unicode-script-values) and [another from the excellent Compart site](https://www.compart.com/en/unicode/scripts))\n\nGeneral Category: Other_Number / No(here’s the [ES spec’s list](https://tc39.es/ecma262/#table-unicode-general-category-values) and the [Compart list](https://www.compart.com/en/unicode/category)).\nTransitively, it’s also part of the more generic `Number` / `N` category.\n\n\nBy the way, most property values have a long form (e.g. Other_Number) and a shorthand (e.g. No). As always, do favor the longer (more legible) version to make your code a bit easier to understand and maintain…\nUnicode Property Escapes to the rescue!\nES2018 brings a new syntax for regexes that lets us match Unicode properties: Unicode Property Escapes. It reads \\p{…}. As is usual for escape sequences in regexes, the positive variant is lowercase, and the negative variant is uppercase (\\P{…}). Just like \\s says “whitespace” and \\S says “anything but whitespace.”\nProperties can be binary (yes/no; list from the ES spec) or more general (anything else, such as General_Category or Script). For binary properties, their name alone is enough; for others, you’ll need to provide a value.\nMany script or category values can be used directly as “binary properties” in the syntax. For instance, you can indifferently write \\p{Emoji} or \\p{Script=Emoji}. Some useful “pseudo-binaries” include Alphabetic, Uppercase, Lowercase, Number (especially Decimal_Number), Diacritic, Emoji, White_Space (that covers many less-usual codepoints, unlike the legacy \\w)…\nSo here’s our solution for extracting any sequence of emojis from a text!\n\nThis can be super cool for lots of other needs, as you might expect:\n\n(Yup, be they ASCII, mathematic double-struck, Arabic-Indic or Kannada, these are still decimal digits…)\nThe MDN docs are, as always, excellent.\nBonus: the singleline/dotall flag\nOne of many regex syntax improvements that came out with ES2018 is the long-awaited s flag, for “single line” (also known as “dotall”), that extends the any character class (.) (single period) to also match line breaks and carriage returns:\n\nBefore that, we had to resort to rather puzzling hacks (instead of .), such as [^] (a class meaning “everything except nothing”) or [\\s\\S] (that said “all whitespaces and all non-whitespaces”), which made the intent rather unclear…\n",
		"description": "Emojis are everywhere, but it’s hard to identify, extract and collect them from a string.  They’re on the rise and always use surrogate pairs…  How can we be quick and clean about this?",
		"date": 1589414400,
		"image": "/assets/images/art-vid/js-nugget-11.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Extracting emojis from a text",
		"url": "https://delicious-insights.com/en/posts/js-extract-emojis/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": "Welcome to the twelfth post of our daily series “19 nuggets of vanilla JS.” Have you heard of “named parameters,” also known as “keyword parameters?” They’re very handy but don’t exist per se in JavaScript. Fortunately, we’ve had ways to tackle this for a long time, that have become even easier with ES2015…\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nProperly sorting texts\nExtracting emojis from a text\nProperly defining optional named parameters (this post)\nconst is the new var\nUsing named captures\n…and beyond! (fear not, all 19 are scheduled already)…\n\nNo “actual” named parameters…\nMany languages feature something commonly referred to as “keyword parameters” (or named parameters), that provide much-improved ergonomics as they let us:\n\nname the arguments we pass, clarifying the call site;\nonly pass the arguments we need;\nnot rely on a specific argument order when writing the call.\n\nHere’s an example in Ruby:\n\nYou can find these in Kotlin, Python, Swift… In C#, any argument can be named at call time (as long as there are no anonymous arguments following it). Long story short: it’s a common feature.\nJavaScript doesn’t really have that. When calling a function, we simply pass arguments in the same order as parameters were defined: the pairing is implicit and position-based.\nEven when not falling into signatures from Hell, this can quickly devolve into quite unreadable stuff:\n\nA rule of thumb states that when you have many consecutive arguments of the same type with no domain-natural ordering, or when you exceed 3 arguments (even intuitive ones), you should name your parameters. OK then, but how do we achieve that in JS?!\nThe options hash, a time-honored solution\nBefore Ruby formally had keyword parameters, it cheated by using a final Hash-type argument, as the language’s syntax then let us skip the curly braces of the literal Hash at call time. In JavaScript, we’ll use an options hash too (an object literal, to be blunt), except curlies are required.\nWe’ve done this for a long time. After all, jQuery.ajax(…) sports 35 options, can you imagine how ugly calls would get if they were provided positionally?\nWhen we did that the old way, this was quite cumbersome:\n\nBesides producing verbose code, the signature lacks any useful information; if we didn’t have detailed docs (or a well-crafted type definition), we’d be screwed and would have to rummage in the source code, looking for clues. Plus, said code starts with “noise” instead of operational code.\nNamed destructuring of the argument\nThe advent of named destructuring in ES2015 certainly helped make things a bit clearer:\n\nAt least the signature was a bit more descriptive (and we’ll get better autocompletion, etc.). Even without a detailed type definition, most EDIs and decent editors would provide completion on option names.\nDefault values\nES2015 also brings default values to the table. These are used if and only if (“iff”) the origin data is undefined, which is usually cool, but not always what we need. For instance, if you regard null, 0 or false as invalid, you’ll need to manually massage these arguments some more. But still, that’s pretty cool:\n\nAgain, this information will surface in autocompletion when writing the call, which always comes in handy.\nWhat if I don’t want to pass anything?\nA common pitfall happens when your function should, in practice, allow a call with no arguments. This would happen if all your named options feature a default value (or for those that don’t, undefined is considered acceptable). You’d then like to allow empty calls:\n\nThe issue here is that the signature destructures its argument, but you can’t destructure null or undefined, hence the TypeError.\nDo note that if even one of your arguments is mandatory (e.g. timeout), on the one hand you won’t put a default value to it, and on the other hand, that type of crash on an empty call likely becomes legit.\nBut how could we allow an empty call? We only need to provide a default value for the argument itself:\n\nHere, as no object was passed for the argument, it will default to the empty object, and since that doesn’t have any of the properties our options need, their individual default values will be used.\nWhy put the default values in the destructuring?\nI sometimes get asked why we should put the default values inside the destructuring, instead of on the higher-level default value for the whole argument, like so:\n\nBesides repeating option names, this spells trouble when you do pass some of the options in: your (partial) options object will replace the default object. The destructuring will not provide default values for missing options.\n\nSo always put your options’ default values inside the destructuring. Plus, it’s shorter.\n",
		"description": "JavaScript doesn’t have keyword parameters? No worries! Named destructuring offers a way out… but beware of edge cases.",
		"date": 1589500800,
		"image": "/assets/images/art-vid/js-nugget-12.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Properly defining optional named parameters",
		"url": "https://delicious-insights.com/en/posts/js-optional-named-parameters/",
		"locale": "en",
		"readingTime": "4 min"
	},	{
		"content": "In this thirteenth installment of our daily series “19 nuggets of vanilla JS,” we broach a controversial topic: the respective roles of the three generic declarative keywords: var, let and const (as there is no question about their two friends, function and class).\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nExtracting emojis from a text\nProperly defining optional named parameters\nconst is the new var (this post)\nUsing named captures\nObject spread vs. Object.assign\n…and beyond! (fear not, all 19 are scheduled already)…\n\nBefore / after ES2015\nBefore ES2015, JavaScript only had two declarative keywords: var and function. Besides, the unit of scope wasn’t the block (as delimited by a pair of curly braces), unlike many other languages, but the function (or lacking one, the global scope).\nThis can catch you off-guard when coming from other languages; and it is never good for a programming language to behave in surprising ways: this is an infinite source of bugs.\nAs a result, ES2015’s three new declarative keywords (let, const and class) all use the current block as their scope, which is way more aligned with the dominant paradigm in programming. Their identifiers only exist between the declaration line and the end of the block (even implicit blocks, like curly-less loops).\nThe hoisting of var: an antipattern\nIt also happens that the two historical keywords are hoisted: this means that the JavaScript interpreter will act as if their declarations (but not their initializations, if any) happened at the top of the scope, regardless of their actual source position. To be more precise, var declarations get hoisted first, then function ones.\nIt was conceived as an attempt at preemptive optimization in the very first implementation of the engine, and doesn’t belong in 2020, or 2015, or even 2009. It is actually sneaky. Check this out:\n\nThis is as counter-intuitive as it gets. By rights, that code shouldn’t even be syntactically valid. Failing that, it should at least throw a ReferenceError on the first line of the function. But it doesn’t! Because of hoisting, the code that is actually run looks like this:\n\nThere is zero benefit to hoisting variables. None. Zero. Zilch. In fact, the consensus today is that var doesn’t belong in modern (ES2015+) JS code.\nAt a minimum, it should therefore be replaced by let, that has two benefits:\n\nIt is not hoisted, so any attempt at referencing it ahead of its declaration will throw a ReferenceError, as you would expect.\nIts scope is the current block, which again aligns better with common expectations.\n\nBut then, why is this post about const instead of let?\nReassigment is a rare beast\nThere is exactly one difference between let and const: the latter doesn’t allow reassigning. A const declaration must be initialized on-the-fly, and cannot be reassigned (e.g. with =) later.\nWhy prefer const?\nIn practice, the overwhelming majority of declarations are never reassigned; the reason is simple: altering the semantics of an identifier along the code creates confusion (reducing maintainability) and fosters bugs. Sometimes a reassignment is perfectly justified (e.g. async initialization, progressive refinement of a piece of data) but this remains a tiny minority case.\nOn the other hand, it often happens that we reassign by mistake. Why? Mostly due to sloppy copy-pastes or a clumsy code completion.\n\nBy going with const by default, we’re immediately held back by the collar when trying to reassign an identifier by mistake. If ESLInt is properly configured (see later), it will spot this immediately (within seconds of typing the code, or at commit time with relevant hooks set up). At worst, we’ll get a runtime error (if our JS runtime understands const anyway).\n\nThere is this tendency to stick with let in numerical for loops, but numerical for loops should be super-rare now that we have the excellent for…of loop (spoiler alert: a later nugget will cover this in-depth). No reassignment (such as i++)? No need for let!\n\nBeware of parameters\nIn a function, signature parameters are not declared with let or const , so stay sharp.\nconst ≠ immuable\nAnother critical point is: not reassignable does not mean immutable. In particular, if the identifier references an object, it’s still perfectly possible to alter the contents of it. You just can’t reassign the identifier.\n\nIf you wish to “freeze” an object, there are multiple levels you can go for; at top-level, the standard library ever since ES5 features Object.preventExtensions(), Object.seal() and Object.freeze(), in increasing locking order. For recursive freezing, you’ll need to look for utility libraries such as deep-freeze-strict. But you can do it.\n(On a side note, if you follow functional programming principles of immutability, you should not even need to do any of this.)\nESLint rules\nOur beloved ESLint features several rules on this topic:\n\nno-var barks on any use of var\nprefer-const spots any let or var declaration that is never rea",
		"description": "Should you still use “var”?  Should we just replace it with “let”? What about “const” anyway?  In this post, we’ll explain why “const” should be your go-to declarative keyword.",
		"date": 1589587200,
		"image": "/assets/images/art-vid/js-nugget-13.jpg",
    "_tags": ["js","tutoriel"],
		"title": "const is the new var",
		"url": "https://delicious-insights.com/en/posts/const-is-the-new-var/",
		"locale": "en",
		"readingTime": "4 min"
	},	{
		"content": "This is already the fourteenth installment of our daily series “19 nuggets of vanilla JS,” and we’re talking again about regular expressions to shed some light on one of the nicer regex novelties in ES0218: named capturing groups, also known as “named captures”.\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nProperly defining optional named parameters\nconst is the new var\nUsing named captures (this post)\nObject spread vs. Object.assign\nConverting an object to Map and vice-versa\n…and beyond! (fear not, all 19 are scheduled already)…\n\nA quick recap on groups\nIn a regular expression, we use groups to apply quantifiers or alternatives to more than one character.\nLet’s say we want to express “the letter ‘b’ at least once”: we would write b+. But to say “the text ‘ba’ at least once”, we can’t just go with ba+: that pattern would mean “the letter ‘b’, followed by at least one letter ‘a’”. We thus create a group around that text, on which the quantifier applies: (ba)+.\nIn the same way, baba|bébé means “‘baba’ or ‘bébé’”, but to say “‘hi’, followed by either ‘baba’ or ‘bébé’, followed by ‘!’” we would have to write hi (baba|bébé)! to restrict the scope of the alternative: without the group, it would mean “‘hi baba’ or ‘bébé!’”.\nCapturing groups\nBy default, groups are capturing: the part of the scanned text that ends up matching them is isolated in a captured group with an index. Group zero is always there: it contains the expression’s entire match. Groups starting at one (1) are the captured groups. As a result, if you look into a match result (an extended Array object returned by match or exec) produced by the expression in the code below, you’ll find, among other things, properties 1, 2 and 3 holding the three captured groups.\n\nCapturing groups are also handy for backrefs (back references): these let us express that our pattern should contain “the same source text as the one that matched an earlier spot in our expression.” Let’s say we want to match an HTML attribute, the value of which can be surrounded by single or double quotes (' or &quot;). The critical thing is, we need the same delimiter on both sides. We can use a backref with the proper captured group index for this:\n\nIn the regex above, the delimiter pattern (['&quot;]) is in the first capturing group: to use a backref on it, we will therefore type \\1.\nIt follows that when using a regex with the String#replace API and providing a text-based replacement pattern, we can reference captured groups with the $index notation. Look at this:\n\nNon-capturing groups\nThese group indices quickly get out of hand, though. The moment we add a group somewhere, it offsets all the later indices! Say we want to allow a phone number to be prefixed with “tel:”, it offsets everything else:\n\nFor all we know, we may not even care whether the “tel:” protocol is there or not, we just wanted it to be part of the pattern matching! We didn’t mean to wreck havoc on our captured group indexing. In such scenarios, we can use non-capturing groups by starting them with (?: instead of just ( :\n\nGroup specialization\nAs a general rule, any capturing group specialization starts with (?:\n\n(?: for non-capturing groups,\n(?= for lookaheads,\n(?! for negative lookaheads,\n(?&lt;= for lookbehinds,\n(?&lt;! for negative lookbehinds.\n\nNamed capturing groups\nMany languages feature a better way to capture groups: by naming them. It is more readable and more resilient to changes in the expression: no surprise reference shifting as with the indices.\nES2018 finally provides this! The API changes for this are many:\n\nYou can define a named capturing group with (?&lt;name&gt;expr) (so between angular brackets, before the pattern).\nYou can do a backref with \\k&lt;name&gt;.\nThe match result features a groups property, that becomes an object whose properties use the capturing groups’ names.\nThe textual replacement pattern in String#replace allows $&lt;name&gt; for referencing named captured groups.\n\nAs a side note, these groups also have indices but We Just Don’t Care™.\nRevisiting our previous examples as named captures:\n\nI mean, just 😍.\nWhere can I get that?!\nPretty much everywhere that matters: this is supported natively since Chrome 64, Firefox 78, Edge 79, Safari 11.1, Opera 51 and Node 10.\nOtherwise, Babel transpiles (including env and latest presets).\nBonus trick: String#matchAll(…)\nA long-time gripe with String#match (and its RegExp#exec counterpart) is that you could not quite have your cake and eat it too when using capturing groups:\n\nEither you used the g flag (for global, which returns all matches of the entire pattern) and you got an array of full-pattern matches, without their individual groups.\nOr you did not use the flag, and got either null or a match result, with individual captured groups.\n\nHere it is in all its (infamous) glory:\n\nSince ES2020 however, we finally get String#matchAll, that returns an iterator (even better than a dumb Array) on match results:\n\nThat’s 👏 Just 👏 Spi",
		"description": "ES2018 finally brings named capturing groups to regular expressions and boy is it cool!",
		"date": 1589673600,
		"image": "/assets/images/art-vid/js-nugget-14.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Using named captures",
		"url": "https://delicious-insights.com/en/posts/js-named-captures/",
		"locale": "en",
		"readingTime": "4 min"
	},	{
		"content": "Let’s get started with our fifteenth post in our daily series “19 nuggets of vanilla JS,” this time to clear things up on the roles and respective specificities of the object spread syntax that became official with ES2018, as opposed to the Object.assign(…) API formalized in ES2015. As you’ll see, these are not quite the same thing.\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nconst is the new var\nUsing named captures\nObject spread vs. Object.assign (this post)\nConverting an object to Map and vice-versa\nThe for-of loop: should there remain only one…\n…and beyond! (fear not, all 19 are scheduled already)…\n\nFrom extend(…) to assign(…)\nIn JavaScript, we’ve always needed to copy properties (and their values) from one object to another. Whether we’re trying to build an options hash, create a descriptor or something else, this is a common scenario.\nConsidering that it is rather tedious to do manually (for…in loop, hasOwnProperty(…) safeguard, and more), we quickly saw the emergence of utility functions from third-party libraries, and ultimately in the language’s standard library.\nThe story of Object.assign(…) goes way back already:\n\nIn 2004, Prototype.js popularizes Object.extend(…).\nSoon enough, jQuery 1.0 features jQuery.extend(…) (usually called as $.extend(…)), that generalizes the signature to allow for not just one source object, but as many sources as we’d like, which becomes the accepted signature for this kind of operation; jQuery will then extend (1.1.4) its semantics to offer deep merging in addition to the original, shallow merge.\nUnderscore and then Lodash expectedly feature _.extend(…), again with any number of sources (Lodash also offers the much older _.assign(…), which is closer to what ES2015 will standardize).\nIn 2015, the standard library for the language grows with ES2015 and features Object.assign(…), that copies all own enumerable properties from sources to the destination (with the good manners of ignoring undefined source arguments, making our calling code easier).\n\nHere are a few quick call examples:\n\nES2018 and the object spread\nWhen the React team designed the JSX syntax, it proposed an extension that quickly grew popular: the spread of props, that lets us use an object’s properties as a sort of dynamic bag of props:\n\nES2015 featured a spread on iterables (e.g. Array and String), but not on any plain object: we had to wait for ES2018 to get Rest/Spread Properties. The syntax can be used exclusively inside an object literal, thus in the creation of a new object. Just like the spread on iterables, we can use multiple object spreads in a literal. The order only matters when several spread properties share the same name: the last relevant spread wins.\n\nWith object spreads, it becomes easy to quickly derive an object from another, which makes it a particularly popular syntax when writing immutable reducers, as with Redux:\n\nA subtle difference\nYou might think that the two bits of code below are equivalent:\n\nIn this specific example, the two results will indeed end up the same. The target of Object.assign(…) is a new empty object of type Object, with no existing writer accessors.\nHowever, when code using Object.assign(…) writes to a more advanced object, perhaps one with writer accessors, things start to differ: the second version creates a new object of type Object, it doesn’t write in the original result. This means that not only might you change the underlying type of the object (result might have been an instance of a custom class of yours), but you’re killing any guarantees or built-in behaviors its writer accessors ensured.\n\nSo there you have it. It’s not quite the same thing, but that’s rarely an issue in practice. Just remember that an object spread always returns a new object, of type Object, and will therefore blissfully ignore the type and writer accessors of the original object(s).\nIncidentally, this means that when you must alter the original object, perhaps for object identity purposes, you can only go with assign(…).\nDepending on the specifics of your situation and need, you’ll either go with assign(…) or an object spread. Choose wisely.\nWhere can I get that?!\nAs for Object.assign(…), it’s been natively supported since Chrome 45, Firefox 34, Opera 32, Edge 12, Safari 9 and Node 4.\nObject spreads showed up more recently but a while ago still: in Chrome 60, Firefox 55, Opera 47, Edge 79, Safari 11.1 and Node 8.3.\nAs always, for older platforms core-js and polyfill.io provide the former, and Babel can transpile the latter.\n",
		"description": "With everyone boarding the “object spread” train, is there still a place for the Object.assign(…) API?  Are they even different?  Yes, and yes!",
		"date": 1589760000,
		"image": "/assets/images/art-vid/js-nugget-15.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Object spread vs. Object.assign",
		"url": "https://delicious-insights.com/en/posts/js-object-spread-assign/",
		"locale": "en",
		"readingTime": "4 min"
	},	{
		"content": "We’re already at the sixteenth post in our daily series “19 nuggets of vanilla JS.” Today is about one of the new collection types that appeared in ES2015: Map. When should you use it instead of a plain object, and how to easily switch between these two representations?\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nUsing named captures\nObject spread vs. Object.assign\nConverting an object to Map and vice-versa (this post)\nThe for-of loop: should there remain only one…\nSimulating an abstract class with new.target\nSurprise!\n\nObject, an easy dictionary\nJavaScript has always used good ol’ objects to pair keys and values (and starting with JS 1.1, object literals made it much more concise):\n\nAs new versions came out (ES3, ES5, ES2015…), many static APIs appeared to introspect objects:\n\nAwesome! But then, why did we need Map?\nMap: what benefits does it bring?\nUsing plain objects for our dictionaries is indeed convenient and concise, but does suffer from pretty stark limitations:\n\nKeys have to be of type String (or, since ES2015, Symbol): we can’t use a custom object of ours, or a host object (e.g. a DOM node or a fetch request) as key.\nThere is confusion between inherited and own properties (not to mention enumerability). In practice, as we often use plain objects, we only inherit a few things from Object (toString, valueOf, hasOwnProperty and a few more), with names that bear a rather low collision risk. But still, as an extra precaution, we should either start from an Object.create(null) instead of a {}, or always access through appropriate APIs (such as Object.getOwnPropertyNames(…), hasOwnProperty(…), etc.).\nNo easy way to clear the dictionary. No Object.clear() or some such: if you want to retain container identity but clear it out, you’re in for some tedious code.\nIteration order is not guaranteed. Even if, in practice, the order used by for…in, Object.keys() and friends is usually the chronological order of addition, the spec doesn’t mandate it and variations do exist. ES2020 added some clarity, but still.\nNot iterable by default. A plain object is not iterable (in the ES2015 sense) by default, meaning you can’t immediately use it with spreads, positional destructuring, the for…of loop or any other means of consuming an iterable (e.g. parts of the standard library).\nPerformance suffers in mutation-heavy scenarios. To properly optimize the indexing of the object (the access to its properties), JS engines need the object’s “shape” to remain stable: most of the time, changing that shape by adding or removing properties invalidates optimized lookup caches. So if you find yourself adding or removing properties a lot in your dictionary, performance is going to suffer.\n\nThis is why the standard library added Map with ES2015. At the cost of having to use a more explicit, slightly more verbose API (and having to convert for JSON (de)serialization), you get a number of benefits:\n\nKeys can be anything (even undefined is an acceptable key)\nOptimal performance\nIterable by default\nRicher API (including clearing)\n\n\nHow can we switch between the two?\nYou might need, from time to time, to turn a Map into a plain object. Perhaps you want to serialize it as JSON before sending it over the wire or persisting it on disk (and reciprocally, you’d like to turn it back into a Map after fetching it or reading it from disk).\nIf you Map has only String or Symbol keys, this is a one-liner:\n\nES2018 finally brings Object.fromEntries(…), the inverse operation of Object.entries(…) that came with ES2017 (a long-awaited extension there too), which was already a neat addition to ES5’s Object.keys(…) (from 2009).\nThis is easy to polyfill (through the usual means: core-js, polyfill.io, etc.) so even IE9+ could use it (and you can polyfill Map too).\nIf you absolutely must limit yourself to ES2015 without polyfills (but why? Are you that masochistic?), you can emulate that with a huge beast of an expression involving map.entries(), Array.from, reduce, property descriptors and Object.create. As academic literature is fond of saying, this is “left as an exercise for the reader”.\nWhere can I get it?!\n\nObject.entries(…) has been supported since Chrome 54, Edge 14, Firefox 47, Opera 41, Safari 10.1 and Node 7.\nMap has been native since Chrome 38, Edge 12, Firefox 36 (even Fx20 for what we need here!), Opera 25, Safari 8 and Node 4.\nObject.fromEntries(…) showed up in Chrome 73, Edge 79, Firefox 63, Opera 60, Safari 12.1 and Node 12.\n\nAgain, this is all easy to polyfill anyway.\nWant to dive deeper?\nOur trainings are amazeballs, be they in-room or remote online, multi-client or in-house just for your company!\n",
		"description": "Sometimes a plain object is enough.  Other times you’ll want a Map.  But why, and how, can you switch between one and the other?",
		"date": 1589846400,
		"image": "/assets/images/art-vid/js-nugget-16.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Converting an object to Map and vice-versa",
		"url": "https://delicious-insights.com/en/posts/js-object-map/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": "Welcome to the seventeenth installment in our daily series “19 nuggets of vanilla JS.” Today we talk about one of my favorite new (ES2015+) language features: the for…of loop. Still vastly underused or even poorly known, it advantageously replaces most numerical for loops and forEach calls, and offers many side benefits.\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nObject spread vs. Object.assign\nConverting an object to Map and vice-versa\nThe for…of loop: should there remain only one… (this post)\nSimulating an abstract class with new.target\nNegative array indices thanks to proxies\n\nOur story so far…\nHistorically, JavaScript had 4 loops:\n\nThe numerical for, coming over from C and widespread in other languages. Its syntax is completely obscure to beginners, but hey, it is what it is…\n\n\n\nThe for…in, that is nothing like similar constructs in other languages. It is specifically designed to iterate over enumerable properties of an object, regardless of whether they’re inherited (through the prototype chain) or own (object-specific) properties.\n\n\n\nThe while, again available across many, many languages. Instead of following a sequence, it relies on a condition evaluated at the beginning of every turn, which serves as a “keep going” indicator (so it is possible never to enter such a loop):\n\n\n\nThe do…while, which is similar but uses an end-of-turn condition, so you’ll run through the loop at least once.\n\n\nWith the exception of for…in, all these loops are found across a wide variety of programming languages.\nWhat does for…of do?\nES2015 formalizes the all-important notion of iterability. The iterable protocol is clearly defined (via the Symbol.iterator built-in — “well-known” — symbol), and many built-in objects are iterable: arrays obviously, but also strings, Maps, Sets, NodeLists… Not only that, but many objects offer multiple iterators beyond their default iterability.\nThe for…of loop is the only way to consume iterables that provides full control to your code: you can consume just as much as you need, with the quantity itself being dynamically defined by your algorithm.\nSay you want to consume a good ol’ array, focusing as usually on values, not indices; this becomes much more palatable:\n\nAs with any loop, you are free to leverage break, continue and return in there. It is, however, much more versatile than a numerical for: it works without indices / positions (e.g. on a Set), and you don’t need to remember caching the length for performance…\nA nice opportunity to use const\nDid you notice how we could prefer const when using for…of? That’s because we don’t need to reassign anything: we work directly with the value, not with an index we need to manually move forward. So we may as well declare it as const to avoid mishaps.\nOn-the-fly destructuring\nWhen the iterator you’re consuming emits value tuples, feel free to destructure on-the-fly. As an example, instead of doing this:\n\nFeel free to do this:\n\nKeep an eye out for extra iterators\nMany iterables offer additional iterators besides their default one. Most offer at least three conventional methods: keys(), values() and entries(). When the concept of a key is missing (as in Set), keys are… the values. You can also sometimes find more domain-specific iterators. Sky’s the limit! (Unless you’re working on, say, the tactile UI in SpaceX’s Crew Dragon. In which case, mad ~envy~respect.)\nSay we need to consume an Array using for…of, but need to get the indices too. This just needs the right iterator:\n\nFriendly to lazy evaluation\nA major benefit of for…of is this: since it doesn’t necessarily consume the whole iterable, but rather on an as-you-go basis, it is the main way to consume lazy-evaluated computations, including infinite iterables such as mathematical sequences, some crypto stuff, value generators, etc.\nSay we have a generator implementing the Fibonacci sequence:\n\nThis sequence never ends: should we try to Array.from(…) or spread it, we’d run out of memory (although the JS runtime would likely give us the ax before that). We can however get the first few terms by, say, positional destructuring:\n\nYet how could we browse it as-we-go, without knowing ahead of time how many terms we want? We need a loop so we can exit on-demand. Should we need to display all terms below 100, we could go like this:\n\nAll lazy-evaluation primitives (a pervading concept in functional programming) can be implemented that way, for instance the ahead-of-time consumption capping with take:\n\n(If you use RxJS or Ramda, this should look familiar…)\nPerformance\nYou can read every possible opinion online in terms of performance benchmarks for competing loop styles. Most are not very relevant / useful. What you need to keep in mind are two things:\n\nAs long as you’re not iterating across huge arrays (on the order of 1M+ items), the difference will be negligible.\nEven beyond that, if your loop is not transpiled (see further below about native support), these are pretty much equivale",
		"description": "Forget numerical for: the for…of loop that came out in ES2015 is your new best friend. Such versatility!",
		"date": 1589932800,
		"image": "/assets/images/art-vid/js-nugget-17.jpg",
    "_tags": ["js","tutoriel"],
		"title": "The for…of loop: should there remain only one…",
		"url": "https://delicious-insights.com/en/posts/js-for-of/",
		"locale": "en",
		"readingTime": "4 min"
	},	{
		"content": "This is nearing the end of our daily series “19 nuggets of vanilla JS.” Today we’ll look at the seldom-known new.target, that showed up with ES2015 and lets us simulate abstract classes, among other things…\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nConverting an object to Map and vice-versa\nThe for-of loop: should there remain only one…\nSimulating an abstract class with new.target (this post)\nNegative array indices thanks to proxies\n\nWait a minute — isn’t new an operator?!\nYes it is. Still, JS has a few syntactical oddballs, including new.target, that is legal (inside functions) and references the operand that was passed to new when instantiating the current object.\nConsequently, if the current function is not run in the context of an object (if there is no valid resolution of this), it will evaluate to undefined.\nFunction Environment Record\nEvery execution of a function spawns a function environment record (FER), that lists all the accessible bindings (associations of values to identifiers) made available by the call to that function. At any time, evaluating a reference traverses a series of active environment records, that roughly align with the relevant function scopes.\nThe FER for a traditional (non-arrow) function includes four specific bindings dynamically defined at call time: this, arguments, super and new.target. (An arrow function has none of these, its code will therefore resolve these references in the FER of the closest enclosing non-arrow function.)\nWhat is it for?\nA common scenario is about abstract classes. Quick reminder: in OOP, an abstract class acts as the starting point of a hierarchy of classes, but is incomplete in itself: you’re not supposed to instantiate it directly.\nLet’s illustrate this with the done-do-death hierarchy of geometrical shapes. Sure, they all have a few things in common, like an origin point and methods for drawing and computing the perimeter or area, that are justification enough for a common base class Shape. But a “shape” in itself does not tell us which specific shape to draw, so instantiating new Shape(…) wouldn’t make sense!\nJavaScript has no abstract keyword to express this, but we can simulate it by testing, in our constructor, that the operand passed to new wasn’t the base class (i.e. Shape) but rather a subclass (e.g. Square):\n\nWhere can I get that?!\nIt’s been natively supported for a while already: Chrome 43, Firefox 41, Opera 33, Edge 13, Safari 11 and Node 5.\nBabel and TypeScript transpile as always.\n",
		"description": "Have you heard about new.target?  Thanks to this unusual reference, you can easily implement abstract classes by forbidding some uses of the new operator…",
		"date": 1590019200,
		"image": "/assets/images/art-vid/js-nugget-18.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Simulating an abstract class with new.target",
		"url": "https://delicious-insights.com/en/posts/js-new-target/",
		"locale": "en",
		"readingTime": "2 min"
	},	{
		"content": "Even the best things have an end: here comes the last post of our daily series “19 nuggets of vanilla JS.” We wrap up with a bang by looking at a cool use of proxies, that amazing feature of ES2015: allowing negative indices on arrays.\n\nThe series of 19\nCheck out surrounding posts from the series:\n\nThe for-of loop: should there remain only one…\nSimulating an abstract class with new.target\nNegative array indices thanks to proxies (this post)\n\n“Proxy” ?!\nI know, I know, cool down. There’s no relation to network proxies. I know how stupidly-configured corporate proxies can be a traumatizing experience, and you have my sympathy.\nA proxy is, by definition, an intermediary. ES proxies are exactly that: objects that intercept every possible interaction with another object, and decide on a case-by-case basis whether to let it through, alter it, forbid it…\nThere is a critical point: a proxy never alters the original object: it is a wrapper of that object, which doesn’t prevent your code from using the original one directly if it holds a reference to it. The idea is that you can pass to external code, when you need it, only the reference to the proxy.\nArrays and negative indices\nAs a reminder, negative arrays start from the end: -1 is the last element, -2 the one before that, etc. Super handy.\nThe API for Array allows negative indices:\n\nslice(from, to) allows negative values.\nsplice(from, count[, ...items]) allows a negative from.\n\nUnfortunately, the general semantics of the indirect indexing operator, […], mandates that the property whose name is evaluated between the square brackets exists with that name. And numerical properties of arrays are not negative.\n\nThis is sorely needed, wouldn’t you say? Soon we’ll get .at(…) (on all iterables, too), but still, not as cool!\nSo let’s add them. 😎\nIt’s a trap!\nA proxy is defined based on two things:\n\nA target: the original object that we’re about to wrap.\nA handler, that is a plain object featuring predefined methods, called traps. An empty handler will not alter any behavior, making the proxy superfluous.\n\nThe language defines one trap per possible interaction with an object. Among others, we have has intercepting the in operator for testing the existence of a property, or apply intercepting, on function objects, the act of calling them (with the (…) operator).\nThe general syntax goes like this:\n\nWhat we’re interested in are the get and set traps, that intercept reading and writing properties. We won’t go as far as ensuring full consistency through extra traps such as has, ownKeys and deleteProperty, because in truth arrays are seldom used in ways other than indexing cells or performing API calls. But if you’d like to go all-out, be my guest!\nImplementing read access\nOK, let’s start with reading. Here is the general idea:\n\nWe get the name of the requested property (which will technically be either a String or Symbol, as these are the only two valid types for property names in JavaScript).\nIf that name expresses a negative integer (which we can’t test on a symbol, so we’ll need to be careful), we convert it to its equivalent positive integer by…\n\nturning it into an actual Number\nadding it to length\n\n\nAs a final step, we delegate to the native implementation of reading a property.\n\nSo how do we go about coding this, exactly? A best practice for proxies is to use the Reflect API, that came along with them and provides a rather low-level access to the native interaction for every trap. So for our get trap, we would use Reflect.get, which has the exact same signature.\nLet’s get coding:\n\nIsn’t life beaaauuuuutiful?\nImplementing write access\nFor writing we’ll go the exact same route, but with the set trap:\n\nAnd voilà!\nWhere can I get that?!\nProxies have been natively supported since Chrome 49, Firefox 18, Opera 36, Edge 12, Safari 10 and Node 6.\nHowever, unlike previous posts in this series, you can’t fallback to transpiling. It is, quite simply, impossible to emulate proxies in ES5. So either it’s native, or you need to hack like crazy with accessors and property descriptors, which is slower, heavier, and most importantly not dynamic at all (properties must be known and wrapped ahead of time, which in our particular example would be either infeasible or extremely cumbersome).\nWant to dive deeper (in proxies)?\nIf this peaked your interest, I explored proxies in-depth (with tons of fun examples and useful ones) in a talk I gave, among other places, at Fronteers 2019 (slides are here).\nOur astounding 360° ES training course also dives deep in them.\nWant to dive deeper (in general)?\nOur trainings are amazeballs, be they in-room or remote online, multi-client or in-house just for your company!\nThat’s a wrap!\nPfew! There you have it: 19 days, 19 posts on JavaScript “nuggets.” I hope you enjoyed the ride, smiled, had a laugh or two, learnt some things, couldn’t believe some of it, and more. Feel free to tweet about it!\nWe’ve got more series planned, about new ES2020 stuff and Node.js ",
		"description": "Proxies are a wonderful feature from ES2015.  This post explores a cool trick you can do with them: negative array indices.",
		"date": 1590105600,
		"image": "/assets/images/art-vid/js-nugget-19.jpg",
    "_tags": ["js","tutoriel"],
		"title": "Negative array indices thanks to proxies",
		"url": "https://delicious-insights.com/en/posts/js-index-proxies/",
		"locale": "en",
		"readingTime": "4 min"
	},	{
		"content": "Welcome to our new series of articles and videos: Idiomatic JS!\nThrough 14 installments in the next few months, we’ll help you “level up” your modern JS game so you write code that is more idiomatic: that is, code that embraces the philosophy and intents of modern JS, with modern syntax, approaches… in short, be done with old-school JS!\nHere’s what’s in store for you:\n\nShorthand properties and methods\nComputed property names\nClasses (including all the ES2022 stuff!)\nDestructuring\nRest and Spread (including ES2018)\nDefault values\nTODO Template literals\nTODO Scope, hoisting and declarative keywords\nTODO Binding and this\nTODO Arrow functions\nTODO Optional chaining and Nullish coalescing\nTODO Logical assignment operators\nTODO Numerical separators\nTODO ESM (ES modules)\n\nThere’s always more!\nBesides our wealth of other articles, you may wish to look at our live training courses! In particular, if you like super-deep dives into JavaScript itself, we heartily recommend our 360° ES course!\n",
		"description": "Enjoy our new must-see series of articles and videos: Idiomatic JS!",
		"date": 1640217600,
		"image": "/assets/images/art-vid/art-js-idioms.jpg",
		"title": "Idiomatic JS: meet our new series!",
		"url": "https://delicious-insights.com/en/posts/idiomatic-js/",
		"locale": "en",
		"readingTime": "1 min"
	},	{
		"content": "Here’s the first installment of our new series: Idiomatic JS. Today we look at shorthand properties and methods, that let us write object literals in a nicer, more concise way.\n -->\nRepetitive duplication\nHere’s a pretty common case of “old-school” JS object literals:\n\nDoesn’t stuff like body: body, method: method, headers: headers or name: name make your eyes bleed eventually?!\nAnd yet, dang it, this happens all the time in everyday JS: we want to build an object literal where all or most properties have values that are just references to same-name values in the scope. Basically, a: a.\nShorthand properties\nES2015 (the 2015 edition of ECMAScript, which is the official standard for JavaScript) introduced shorthand notations for object literals.\nThe case in point (a: a) is so common that it has its very own shorthand: just write the identifier once. Check this out:\n\nIsn’t that way better? Also note you can opt to use this on a per-property basis: quite often, some of our object literal’s properties don’t fit that pattern, and that’s okay:\n\nShorthand methods\nIn the same spirit, the “traditional” way of declaring “methods” has been this:\n\nThis has had a good run, but it’s time to move on. This syntax is indeed quite baffling to people coming to JS from other languages. This is due to JavaScript not having methods in the classical sense; in particular, functions are never intrinsically bound to objects, so “methods” are just properties whose values happen to be function references.\nThe syntax above has a few drawbacks, big or small:\n\nIt’s confusing when coming from another, classical-OOP language\nIt’s verbose\nBefore ES2015, this resulted in anonymous functions (for lack of an explicit name between the function keyword and the opening parens of the signature)\nShould you try and mess around with our object’s prototype then try to access the parent prototype from such a function, you were sometimes in for a surprise.\n\nES2015 thus introduced shorthand methods, that look like this:\n\nThis is better in many ways:\n\nIt looks a lot more like what you see in popular, classical-OOP languages\nIt’s shorter (and thus has a better chance of being used by devs, betting on typing laziness)\nFunctions are intrinsically named (even though ES2015 auto-names any unnamed function except on-the-fly callbacks)\nThe prototype chain works as intended inside such functions\n\nIn short: there’s zero reason not to use it, go for it!\nEvery other line (of your code)\nSuch scenarios are extremely frequent in idiomatic JS. Shorthand properties, in particular, occur virtually every other line. Here are a few real-world examples from our training courses or production projects:\n\nThere’s always more!\nBesides our wealth of other articles, you may wish to look at our live training courses! In particular, if you like super-deep dives into JavaScript itself, we heartily recommend our 360° ES course!\n",
		"description": "Make your object literals lighter with shorthand syntax for properties and methods, available since ES0215.",
		"date": 1640217600,
		"image": "/assets/images/art-vid/art-js-idioms-shorthands.jpg",
		"title": "JS shorthand properties and methods",
		"url": "https://delicious-insights.com/en/posts/js-shorthand-properties/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": "Welcome to the second installment of our Idiomatic JS series.\nObject literals are neat, but property names are usually hardcoded: should you need to slap in a property (or method) whose name varies depending on the context, you used to be out of luck.\n -->\n\nBut ES2015 put an end to this misery!\nBut first, we could start by making that example a bit nicer using shorthand properties:\n\nNow, the only reason we had to use a temporary local identifier was that we could then use JS’ indirect indexing operator […] to dynamically access a property of that object. This is because that operator accepts any expression as its operand, then looks up the property whose name matches the result of that expression.\nES2015 introduced computed property names, which let us use that same square bracket-based syntax directly inside object literals. Check this out:\n\nIsn’t that SUPERNEAT™ 😍, folks?!\nMethods too\nThis also works for shorthand methods, by the way. Say the marker property actually needs to be a method that would return some context-based stuff:\n\nSymbols (especially well-known symbols)\nThis is actually why that syntax made it into the language: to allow our objects and classes to implement well-known symbols.\nOK, so far we’ve seen th syntax and its basic usage. The remainder of this article wades into more advanced use cases, if you like to deep dive into the language. We’ll talk about other facets of JavaScript that some deem exotic, so if you’re happy with what we covered so far, feel free to stop now. We’re just showcasing some concrete examples of interplay with other facets of JS.\nCircling back to symbols, let’s take iterability:\n\nUsable everywhere\nAs is often the case with JS, we tried to design something that would play well with other syntaxes, including shorthand methods, the async qualifier and generative methods. This can become hairy!\n\n\n🤒 Yeah, so, if you didn’t quite grasp that one, this is perfectly normal. It is chock-full of modern syntaxes and slightly quirky notions. But if you’d like to dig in, we have a kickass training course.\n\nOne last example\nThis can be useful on many other occasions. Perhaps in combination with “object spread” (officially named “Rest/Spread Properties,” introduced in ES2018 and covered very soon), where this provides for cheap immutability when deriving objects dynamically, as in a hand-written, vanilla Redux reducer.\nSay you have an application state slice that acts like a dictionary of progresses on goals, with keys being goal IDs and values being your progress level; something like this:\n\nNow let’s say your reducer has a PROGRESS_ON_GOAL action with a goal ID and progress increment; you need to be able to derive a new state that leaves other progresses untouched, but does alter that specific goal’s progress. This is a key of your state object you can’t know ahead of time, it could be anything. Enter computed property names!\n\nThat loocks slick… (You’ll be forgiven if you’d rather use Immer, by the way.)\nThere’s always more!\nBesides our wealth of other articles, you may wish to look at our live training courses! In particular, if you like super-deep dives into JavaScript itself, we heartily recommend our 360° ES course!\n",
		"description": "Fed up with having to use a temp variable just so you can add a dynamically-named property to object literals?  Computed property names to the rescue!",
		"date": 1640217600,
		"image": "/assets/images/art-vid/art-js-idioms-computed.jpg",
		"title": "JS computed property names",
		"url": "https://delicious-insights.com/en/posts/js-computed-property-names/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": "Welcome to the third installment of our Idiomatic JS series.\nAlthough JS has always allowed Object-Oriented Programming (OOP), its initial (constrained) choice of prototypal inheritance has confused most JS adopters. However, ever since ES2015, JS has accrued more familiar syntaxes and feature extensions for defining and managing classes and their instances.\nYou think you already know class, extends, constructor and super? Don’t be so sure… not to mention all the newer stuff. Here’s a solid rundown, full of tasty details.\n -->\nTraditional / classical OOP vs. prototypal OOP: a refresher\nJavaScript and Java were designed at the same time, to be released in the same product (Netscape Navigator 2, late 1995). I already told you that story (FR)… over 10 years ago #NotGettingAnyYounger.\nSun Microsystems’ salespeople pressured Netscape so that JS wouldn’t steal Java’s thunder, as Java was intended to be marketed as the “serious,” “professional” language aiming to replace C++. Because of this, JS was absolutely forbidden to put forth OOP features too similar to Java’s, and Java’s OOP-related keywords were strictly verboten and became reserved words that could not even be used for any identifier or property name (e.g. class, extends, interface, implements, private, public, final…). This is why the DOM has properties such as className (which later percolated in React, for instance, to the sorrow of a great many devs). ES5 (2009) relaxed this to allow such names for property names, by the way. So you can write const obj = { class: 'Foo' } but not const class = 'Foo'.\nStill, for Brendan Eich, who created JS, putting forth a lousy language with no OOP capability was out of the question. This is why he went and looked at the whole breadth of OOP, beyond the minimalistic, “traditional” approach, and opted to use prototypal OOP, drawing inspiration from Self. It now lies at the heart of several programming languages, such as IO.\nWith prototypal inheritance, there are no classes per se, only objects. And any object can serve as a prototype for other objects. An object C can use as prototype an object B, which itself uses object A as its prototype… This is similar to class-based inheritance, but without classes, and perhaps even more crucially with the ability to modify prototype relationships on-the-fly, and the contents of prototype objects themselves, making it all vastly more dynamic!\nIn pratice, prototypal OOP can emulate all of classical OOP, and can even go much further (for languages that go way beyond classical OOP, just look at Ruby or the OG of OOP: Smalltalk). It allows highly-dynamic typing, live updates to prototypal relationships, to prototypes themselves (e.g. as live mixins), singleton objects, eigenclasses and much, much more.\nSo what does vanilla prototypal OOP look like?\nSay we want to create a base class Shape with a couple methods, then a specializing class Square. We’ll intentionally go for old-school, ES3-style code (pre-2009) and forego the then-unofficial __proto__ property. (In ES5 we could have gone with property descriptors and Object.create(), cleaning up that code quite a bit.) It would go something like this:\n\nMy eyes! My eyes!!! (right?) I know… And yet, this provides enormous power. But it is massively confusing when coming from more widespread OOP syntactic approaches.\nES2015’s class syntax\nES2015 (long known as ES6) acknowledged two things when it came to OOP in JS:\n\nPrototype-related syntax was a major hurdle for most people, not to mention underpinning concepts that felt exotic.\nRegardless of the OOP approach, a lot of folks felt victim to common pitfalls in their class code, especially when defining class hierarchies.\n\nTherefore, it started by introducing a new class syntax, that felt immediately more familiar and comfortable. To port the previous example:\n\nHaaaa, that’s better! It feels a lot more like home!\nHidden gems\nIt’s not just about familiar syntax, too. This comes with a lot more flexibility than you’d find in most other languages.\nFor starters, the class itself is an expression, not a declaration. This means you can store a reference to it in a variable, or even return it on-the-fly. So the class could even be anonymous! Look at this:\n\nThis is because under the hood, a class remains a function, the new syntax doesn’t change the underlying implementation based on functions and prototypes:\n\nLet’s one-up that: the extends clause also accepts an expression, not just a fixed identifier. You could extend a class that was provided to you dynamically, or even built on-the-fly through function calls:\n\nAlso note that a class definition being an expression means it is not hoisted, unlike function declarations. Just like the two other declarative keywords from ES2015 (const and let), what it declares isn’t accessible until after it’s run:\n\nNot just syntax, but extra protections\nUsing the modern syntax for defining classes goes well beyond simple writing comfort: JS took this opportunity t",
		"description": "So you think you now everything on ES2015+ classes?  Let’s wager you’ll still learn a ton in this article…",
		"date": 1642377600,
		"image": "/assets/images/art-vid/art-js-idioms-classes.jpg",
		"title": "JS classes with ES2015+",
		"url": "https://delicious-insights.com/en/posts/js-classes/",
		"locale": "en",
		"readingTime": "12 min"
	},	{
		"content": "Welcome to the fourth installment of our Idiomatic JS series.\nES2015 (long known as ES6) brought us a ton of new stuff, including what I like to call the “Holy Trinity:” destructuring, rest / spread and default values. We’ll cover each one in its own dedicated article.\nWith destructuring, there’s this fascinating thing where it seems to take more time to truly “percolate” through our little gray cells, in order to truly be used to its full potential in our code. It often takes months, sometimes even years, by sheer virtue of going through our code again and again, before we truly use it everywhere it’s beneficial. The syntax can also sometimes be confusing. In this article, we’ll do a deep dive into this subject, and try to provide you with enough pro tips to gain the most from it.\nES2015 (longtemps appelé « ES6 ») nous a apporté énormément de choses, dont ce que j’aime appeller la « Sainte Trinité » : la déstructuration, le rest/spread, et les valeurs par défaut. Nous allons les voir chacun dans son propre article.\n -->\nWhy should I destructure?\nThe main goal of destructuring is to grab multiple members at once from a data structure (as in, “in a single statement”). That data structure is necessarily an object: a primitive only holds one value. This helps us avoid multiple declarations or assigments, and spares us from having to index arrays with many “magic numbers” or looking up many properties in sequence from the same source object.\nWe’ll look at the syntax for these use cases in detail, but honestly, which do you prefer? That kind of code:\n\n…or that kind of code:\n\nI know what I like best.\n(If you wonder where all the colons went, you probably haven’t read our article on shorthand properties.)\nWhere can I destructure?\nDestructuring can be used wherever there is an assignment, be it explicit or implicit.\nExplicit assigments\nExplicit assignments are easy to spot: there’s an assignment operator (=), which can happen within a declaration or expression.\nMost folks think you have to be in a declaration (e.g. with const, as const is the new var). Something like this:\n\nAnd indeed, this is the majority case. However, you can absolutely destructure towards existing identifier, as long as they can be rassigned (hence not const-declared). Like in the following writer accessor, for instance:\n\nIn the code above, we destructure directly towards property of the current object (this), which don’t need to be declared ahead of time.\n\nYou might be confused by that semicolon just before the destructuring? This is because, in the semicolon-free style (that we’ve favored for 10+ years), thereis an edge case that warrants a semicolon ahead of an opening square bracket: JS might believe this is a dynamic indexing (square bracket operator, as in items[2]) on the expression ending the previous code line.\nAs we’re at the beginning of a block here, there is no ambiguity and it would work just fine without the semicolon; but we automatically format our code with Prettier, and it tries to help us avoid issues should we later insert a line of code between the opening of the block and our existing line. By forcing the semicolon from the get-go, we’re safe and it reduces later Git diffs should a line be inserted above.\n\nImplicit assigments\nImplicit assigments come from the semantics of the language and don’t need the = operator. This boils down to two scenarios:\n\nParameter names in a function signature (they are implicitly assigned arguments at call time), and…\nContents of a destructuring (every element is implicitly assigned a member of the source object).\n\nCircling back to syntax for a bit, here are examples of each case:\n\nTwo types of destructuring\nThere are two types of destructuring:\n\nname-based: we’re interested in the source object’s property by name, so order doesn’t matter. The source can be any object at all.\nposition-based: the source object must then be an iterable\n\nIl existe deux types de déstructuration :\n\nnominative : on s’intéresse aux propriétés de l’objet source par leurs noms, et l’ordre n’a donc aucune importance. La source peut alors être n’importe quel objet.\npositionnelle : l’objet source doit alors être un itérable (the most famous ones being arrays), and we’re interested in properties by their “position,” or more accurately by their order in the iterability sequence of the source object.\n\nIn both cases, we destructure an object, which means you can’t destructure null or undefined, which are not (even if, for lousy legacy reasons, typeof null === 'object' 😩).\nName-based destructuring\nName-based destructuring is wrapped by the same delimiters that denote an object literal: curly braces (or curlies for short, { … }). Here’s a destructuring declaration:\n\nAnother example in a function signature:\n\nAs you could deduce from the underlying semantics (as in the “BEFORE” segment in the first example below), when the source property doesn’t exist, the created identifier will be undefined.\nWe sometimes need to use “local” ",
		"description": "Destructuring has quickly become a staple of modern JS code… but are you sure you understand it properly?",
		"date": 1643587200,
		"image": "/assets/images/art-vid/art-js-idioms-destruct.jpg",
		"title": "JS destructuring in ES2015+",
		"url": "https://delicious-insights.com/en/posts/js-destructuring/",
		"locale": "en",
		"readingTime": "7 min"
	},	{
		"content": "Welcome to the fifth installment of our Idiomatic JS series.\nIn our article about destructuring, we tackled what I like to call the “Holy Trinity”: destructuring, rest/spread, and default values. In this one, we’ll cover rest/spread, and our next article in the series will cover the last member of the gang.\nMost people were quick adopters of Rest / Spread as introduced in ES2015 (that is, on iterables only). Still, it’s all too easy to overlook some of its finer points, and too few developers routinely use the ES2018 extension to it that works on any object, making vanilla-JS cheap immutability a lot easier. Let’s review all of this together.\n -->\nRest and Spread: what are they for?\nThe idea is to pass a series of values (or key-value pairs) to an common container (e.g. Array or a fresh object), or the other way around.\nThere are many use cases, not all of them immediately apparent, especially when you’re not a seasoned-enough JavaScript developers to have encountered all the relevant scenarios.\nIt’s a single operator for both roles: a ... in prefix position (that is, before the identifier), e.g. ...items. Whether it’s a rest or a spread depends on the location of that in the code:\n\nWithin a declaration, it’s a rest. This could be a function signature or a destructuring. A rest always appears at the end of the list.\nWithin an expression, it’s a spread. For instance arguments in a function call, values in an array literal or object literal. A spread can appear anywhere in the list (not just at the end), which can even contain multiple spreads.\n\nPosition-based rest\nAs the name implies, a rest lets you accumulate… the rest.\n\nYou can use it in function signatures or position-based destructurings ('told you these were best friends).\nThe goal is to accrue all remaining elements in the list in an actual Array. This is useful mostly for two things:\n\nImplementing variadic functions, that is, functions that allow a variable number of arguments. This is an especially good fit for math- or aggregation-related functions (e.g. concatenation, combination towards a single result).\nExtracting the first items in an iterable (usually to apply a specific processing to them) and keeping the remaining items as a whole.\n\nLet’s say you have a function that computes the average value of its arguments, regardless of their number. We definitely could use a loop, which could do with the good ol’ arguments of traditional functions:\n\nIt might be neater to go with Array#reduce() though. Except arguments isn’t an Array1, so tough luck. Not only that, our function signature doesn’t convey that it is variadic. Let’s try with a signature rest instead:\n\n(1: it’s an instance of ~StupidCustomTypeFromHell~Arguments).\nYou should know that before rests, doing that in Array mode looked pretty wild 🔥:\n\nAs a bonus, rests can also be used for arrow functions, which isn’t true for arguments (as we’ll see in our upcoming post about arrow functions #teaser #subscribe #goodForYourSkin).\nWe can also use that within position-based destructurings, as in the code below:\n\nA position-based rest will always produce an array, even if it’s empty. There are no edge cases such as null and undefined, which is #chefskiss design, as edge cases are the worst.\nPosition-based spread\nPosition-based spread can be applied to any iterable, and consumes it entirely: it’s equivalent to putting, at that very code location, every single value in the iterable, separated by commas. You can use within function calls or an array literal.\nAs a reminder, tons of stuff (#jargon) are iterable: arrays naturally, but also NodeLists (you know, the kind of thing document.querySelectorAll() returns, among others), Maps, Sets, and even Strings (which is super cool).\nFor instance, this comes in handy when you have an array of arguments to be passed to a function, which expects them as individual arguments. This could let us use push as a kind of mutative concat:\n\nYou can spread as many times as you need, including several times the same source, as a spread isn’t supposed to alter its source.\n\nPosition-based spread is also a neat way of deriving a new array from an original one (or any iterable for that matter):\n\nAs position-based spread works on iterables, applying it to anything else, including null and undefined, will throw a TypeError.\nToday’s protip\nI see too many folks using position-based spread as the sole contents of a fresh array literal, in order to do a shallow copy of the array, or to convert an iterable to an actual array, like so:\n\nSo yes, ok, that does work, but you should favor the standard library API Array.from(). Not only is it easier to read, it also lets you pass a mapping function that’ll be used on-the-fly, instad of doing a second traversal:\n\nName-based rest (ES2018)\nES2018 standardized a proposal titled Rest/Spread Properties, and that’s awesome.\nName-based rest is only possible within a name-based destructuring. It is quite similar in concept to a position-based rest ",
		"description": "Rest / Spread is kind of a cousin to destructuring, and has become a must in modern JS… Learn how to unleash its power on objects and iterables!",
		"date": 1644796800,
		"image": "/assets/images/art-vid/art-js-idioms-rest-spread.jpg",
		"title": "JS rest and spread in ES2015 / ES2018",
		"url": "https://delicious-insights.com/en/posts/js-rest-spread/",
		"locale": "en",
		"readingTime": "6 min"
	},	{
		"content": "Welcome to the sixth installment of our Idiomatic JS series.\n\nThis concludes our “Holy Trinity” part: after destructuring and rest and spread, here come the default values!\nDefault values have been around for a long time in many popular languages, often well before JavaScript added them in 2015. It was well worth the wait thoough: JavaScript default values pack a lot more punch!\n -->\nWhat are default values useful for?\nWe use them to formalize default data values and to limit when they are applied.\nThey can be used anywhere there an implicit assignment, which means:\n\nFunction signatures (parameters are implicitly assigned arguments at matching positions), and\ndestructurings (their elements are implicitly assigned matching elements from the destructured source).\n\nAs you likely expect, the operator is =, and they are only triggered on undefined.\nHow did we do before them?\nThis is very different from how things were with the traditional hack for default values, that most of the time relied on the **logical OR** (||), hence on boolean coercion 😅 of the original value:\n\nWith that older way, any falsy value was ignored and led to the default value. This would cause issues when some of the falsy values were legit: for instance, in the example above, a times of zero (0 || 1 is 1, as 0 is falsy) or an empty separator ('' is falsy too).\nTrue, it could be useful to handle in one go many values deemed invalid (such as undefined and null, perhaps NaN too), which won’t be possible with the default values syntax, but most of the time, focusing on undefined is more appropriate.\nReadability and code completion\nBesides, a bit like position-based rests within a signature, formal default values provide extra information within the signature about the behavior and expectations of our function, in addition to trimming its early code:\n\nUsing this comes with significant benefits:\n\nThe signature is more informative and explicit.\nCompletion systems and argument info popups in editors and IDEs often surface that information.\nThe beginning of the function is no longer cluttered with “argument massaging,” if only to handle their default values.\n\nAny expression, even backrefs!\nOn the right-hand side of the equal sign, you can type any expression at all, including ones with function calls, which is seldom possible outside of JS. Until we get throw expressions, some use that to make elements “mandatory” at runtime by providing a default value that throws an exception:\n\nWithin the default value’s epxression, you can even re-use terms from earlier in the list. Again, few languages let you do that in their default values. Here’s a sweet example:\n\nHere the second argument’s default value is aligned to the first argument’s text length (well, except if we start playing with Unicode, but we could easily fix that). When I call banner('hello'), within the function, the line argument will be '-----' by default. Lovely!\nAnother little example I love comes from a time when I had to write a function that extracted the opening and closing delimiters of tagged content and return an array of the opener and closer, except when they were identical instead of symmetrical (as are &lt;&gt;, (), [] and {}), in which case the array had to be single-element:\n\nSomething though I would want to use both cases in a unified way, with the opener and closer, even when they were identical. The ability to use backrefs for default values let me write that in a concise, intentional way:\n\nI love it 😍!\nBest practices of signature design\nIn May 2020, in our 19 JavaScript nuggers series, I had discussed how to cleanly define optional named parameters; as it relies on name-based destructuring and default values, allow me to reiterate here.\nNobody likes that kind of function signatures:\n\nSignatures with many parameters that are unintuitive and use the same data type are practically unusable: they pose a dire maintenance threat. We much prefer using named parameters. JS doesn’t have dedicated syntax for that (unlikeRuby, Swift, Kotlin, PHP 8 or C#, to name only these), but we can get reasonably close with name-based destructuring:\n\nThis makes our calls a lot nicer to read, they sort of “self-document:”\n\nThat’s a lot better!\nThat being said… what if all arguments are optional? Either because they all have an explicit default value, or because those that don’t wouldn’t be used when undefined? As our signature stands, we have a problem. Check it out:\n\nBesides being suboptimal for repeat calls (it instantiates a fresh Intl.DateTimeFormat every time under the hood), should we wish to call this function in “100% defaults” mode, how would we? We’d like to be able to write this:\n\nThat breaks because we try to destructure our argument, and as we didn’t pass any, we attempt to destructure undefined, which is not allowed. We would have to swallow this down:\n\nTo enable an empty call, all we need to do is supply a default value for the argument itself! Look at the end of the signature below",
		"description": "Default values might seem like a simple topic, but JS seized that opportunity to get more powerful ones than most, making it well worth the wait!  Read all the juicy bits in this post!",
		"date": 1644969600,
		"image": "/assets/images/art-vid/art-js-idioms-default-values.jpg",
		"title": "JS default values in ES2015+",
		"url": "https://delicious-insights.com/en/posts/js-default-values/",
		"locale": "en",
		"readingTime": "4 min"
	},	{
		"content": " -->\nThe standard Git diff prints tons of information, some are not essential and make what’s printed harder to read. The key issues are that we can’t clearly see file boundaries (where does one end, and the next one start?) and that single-line changes are printed as two separated lines (delete/add), so we struggle in analyzing what changed on a specific line (it’s kind of like a “spot the difference” game).\n\nWe looked at what options and configuration the diff command offered to improve that display, but that wasn’t good enough. So we searched for a third-party tool that might help in the console. We wanted it to be easy to install and use. This is how we found diff-so-fancy.\nThere are many ways to install it easily:\n\nnpm;\nHomebrew;\nNix;\nArch repository;\nppa:aos for Debian/Ubuntu Linux;\na clone of the Git repository.\n\nThe most portable way to install it is through npm:\n\nOnce it is installed, you must instruct Git to use it; add this to your global config:\n\nYou can also customize colors and some of the behavior, but the default settings are great already.\nWith this set up, you now get a display like this one:\n\nHappy diffing 😉!\nProtips galore!\nWe got tons of articles, with a lot more to come. Also check out our kick-ass training courses 🔥!\n",
		"description": "See differences in files, line by line, without any visual pollution",
		"date": 1661644800,
		"image": "/assets/images/art-vid/art-protip-git-diff-so-fancy_en.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Git protip: a nice and efficient diff in the console",
		"url": "https://delicious-insights.com/en/posts/git-protip-diff-so-fancy/",
		"locale": "en",
		"readingTime": "1 min"
	},	{
		"content": " -->\nThe log is a Git tool that displays the commit history of a project. We usually use it to:\n\ndisplay the latest commits, to know how far along we are in our work;\nvisualize the current branches and their respective histories.\n\nProblem is, the “regular” log looks like this:\n\nIt displays commits from most recent (top) to oldest (bottom), without any branch data, but with all the details of every commit:\n\nfull 40-character identifier,\nfull message (might be several lines),\ndate and name of the committer,\nand line breaks to make it breathe.\n\nIt’s rather unwieldy. As a result, many people turn to graphical interfaces (therefore adding yet another tool). And yet, Git log features a ton of options to customize its display. I’ll skip the details here to focus on our magic alias! 🧙‍♂️\nFirst, let’s decide on the data and format that we would like to have in our enhanced display:\n\nConcise: first line of the commit message, who produced it and when, abbreviated identifier (to make it easier to re-use with other commands);\nVisual: clear graphical sequencing of commits, labels for branches, tags and our current position (HEAD).\n\nThis is achieved by the following options:\n\nI’m guessing you won’t want to type that every time, so you might as well set up an alias for it. At Delicious Insights, we named it lg, which is easy to type and close to log. To set it up, just copy-paste the following line into your terminal:\n\nThe rendering is much more pleasant and useful. You can still use the options. For instance, you can extend the visualisation to all branches by doing a git lg --branches (assuming the same history as the earlier example).\n\n\nTip\nYou can configure Git to always use abbreviated identifiers:\ngit config --global log.abbrevCommit true\n\n\n\nMore tips and tricks?\nWe’ve got a whole bunch of existing articles and more to come. Also check out our 🔥 killer Git training course: 360° Git!\n",
		"description": "The classic Git log (for displaying the commit history) does not fit the standard use case. Rather than using a graphical interface, let’s use an alias and a custom log.",
		"date": 1661731200,
		"image": "/assets/images/art-vid/art-protip-git-lg_en.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Git protip: a graphical log that rocks!",
		"url": "https://delicious-insights.com/en/posts/git-protip-lg/",
		"locale": "en",
		"readingTime": "2 min"
	},	{
		"content": " -->\nAh, carelessness, oversights and poorly written messages… All those situations where you created a commit only to realize that you screwed up!\nFortunately, the --amend option of the commit command is there to save you! 😮‍💨\nConceptually, Git will take one step back and one step forward for you. It will revert to the state it was in before committing (equivalent to git reset --soft HEAD~1), and will automatically commit again with all staged files (including the new ones, if you had added to the stage before running the --amend). Basically, it will undo your commit and create a new one instead, allowing / including intermediate changes:\n\nadding / removing files;\nupdating metadata (usually the commit message).\n\n\nThe command is quite simple:\n\nLet’s look at the main use cases.\nAdding a forgotten file\nThe most common case is forgetting to add a file to your commit, most of the time because it was untracked and you just went with the -a option that only adds already tracked / known files to Git: git commit -am '…'.\nTo fix this, you’ll have to:\n\nadd the file(s) to the stage: git add &lt;les-chemins&gt;;\ncancel and replace the commit: git commit --amend.\n\n\nTip\nHere is an alias that lets you fix the commit in-place after adding the file to the stage:\ngit config --global alias.oops 'commit --amend --no-edit --no-verify'\nThe --no-edit option keeps the message as-is without going through the editor again. The --no-verify option skips the pre-commit and commit-msg hooks.\n\nRewording the commit message\nLet them who never made a typo or forgot an important reference like a ticket number in a commit message cast the first stone! Personally, I’m terrible at this.\nWhen that happens, you’ll have to:\n\ncheck that your stage is empty (you don’t want to add stuff to the updated commit);\nrun the git commit --amend command, possibly with the message “on the fly” if you want to rewrite it entirely (git commit --amend -m 'New message').\n\n\nTip\nWhy not use an alias whose name would more clearly tells thats its for a rename?\ngit config --global alias.reword 'commit --amend'\n\n\n\nMore tips and tricks?\nWe’ve got a whole bunch of existing articles and more to come. Also check out our 🔥 killer Git training course: 360° Git!\n",
		"description": "You just screwed up your last commit and want to fix it? The \"git commit --amend\" command is your friend!",
		"date": 1664150400,
		"image": "/assets/images/art-vid/art-protip-git-amend_en.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Git protip: fix last commit with \"--amend\"",
		"url": "https://delicious-insights.com/en/posts/git-protip-amend/",
		"locale": "en",
		"readingTime": "2 min"
	},	{
		"content": "\nUpdating our dependencies…\nSo, let’s assume we care about technical debt, and are therefore careful not to let out dependencies fall back too much… Security, performance, new features: there are many reasons to keep our dependencies up-to-date. Which is why it’s good idea to set a bit of time aside on a regular basis to audit “how far back” our installed deps are, and update them properly.\nnpm outdated and npm update are rather lackluster\nOut of the box, npm features an outdated subcommand, that scan all the deps in our package.json to display those that could use an update. You’ll see your currently-installed version, the highest one you can get whilst honoring your semantic versioning constraints, and the highest one available in the registry (which may require a major version bump).\nIt is desirable to call that subcomand with the --long option to see what category each dependency falls into (production, development, etc.), along with its homepage URL for easier access to their changelogs.\n\nThe modules that can be updated within their current semantic versioning constraints (roughly, those with newer versions available within their current major) are listed in red (as it is highly desirable, and very low-risk, to update them).\nModules that are “up-to-date within their major” but can still be updated to a higher major version are listed in yellow (it is still likely desirable to update them, but this may include breaking changes with regard to our codebase our environment). You absolutely should check out their URL and look for the changelog, usually available in a CHANGELOG.md file or through the Releases page on GitHub, in order to make sure whether you need to update your code or environment to use that version, or whether this needs to be postponed for a while.\nYarn and pnpm feature the same subcommand, with a nearly-identical display.\nThis is all nice and well, but it doesn’t perform the update. So we have another subcommand, npm update, that deals with it.\nBUT!\nCalled with no arguments, it will update all the “red” modules to their latest compatible version. We don’t always want this wholesale approach and may require a more granular update; we would then need to explicitly list every module we want to update, which is tedious. Besides, the default behavior won’t update our package.json file to reflect these new “minimum requirements” for our modules, which is a shame: we would need to add the --save option.\nFinally, this doesn’t update the “yellow” modules, that would bump their majors. I then usually go with explicit arguments to npm install.\nIn summary, the ergonomics of this are not good.\nSay hi to npm-check! 🥳\nThis is a small third-party tool that will (when we enable its update mode):\n\nScan our deps for possible updates\nGroup these by update category (patch, minor, major, non-semver — which is below-1.0.0 versions)\nProvide an interactive UI for selecting the modules we wish to update\nRun selected updates by dependency type (production, development, peers, etc.)\nSave new minimum requirements in our package.json\n\nYou could install it globally, but I tend to do a temporary, on-the-fly install when I need it with npx:\n\n\nI should also mention that there is another popular tool for this called npm-check-updates, but I find its ergonomics to be far less pleasing.\nA word about Yarn and pnpm…\nBoth provide the same outdated subcommand, but they also come out-of-the-box with an interactive update feature, with UX pretty much identical to npm-check:\n\nyarn upgrade-interactive --latest (you first need to install Yarn’s interactive-tools plugin though)\npnpm update --interactive --latest\n\nProtips galore!\nWe got tons of articles, with a lot more to come. Also check out our kick-ass training courses 🔥!\n",
		"description": "Easily keep your JS deps in sync with npm-check!  So much nicer than npm outdated + update…",
		"date": 1664323200,
		"image": "/assets/images/art-vid/art-js-protip-npm-check.jpg",
    "_tags": ["js","tutoriel"],
		"title": "JS protip: npm-check",
		"url": "https://delicious-insights.com/en/posts/js-protip-npm-check/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": "For goodness’ sake 🙏, tell me you don’t use git blame! Apart from its implied intent, that command will not display the information you’re looking for, but worse, you’re probably going to misinterpret it and go heckle an innocent colleague (adding insult to injury 😡).\nWhat does this command do? It shows what revision and author last modified each line of a file but without displaying what changed!\nThis means that your colleague may have removed whitespaces or reindented the lines you were analyzing, so they ’re not the author of the bug you found. And you’ll look like an idiot when it turns out it was you who failed and actually introduced the bug.\nThis brings us to the correct approach: tracking changes to the line range with git log -L. There are other options like -S or -G you may be interested in. You can learn more about it in that other article.\nWant more tips and tricks?\nWe’ve got a whole bunch of articles, and more to come. Also check out our 🔥 killer Git training course: 360° Git!\n",
		"description": "Do you think \"git blame\" is a good idea? You’re wrong!",
		"date": 1665360000,
		"image": "/assets/images/art-vid/art-protip-git-blame_en.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Git protip: blame, please don’t!",
		"url": "https://delicious-insights.com/en/posts/git-protip-dont-blame-me/",
		"locale": "en",
		"readingTime": "1 min"
	},	{
		"content": "Have you ever wanted to track only code changes made within a function? That would be great! We could then find where a bug comes from or get a sharper view of the work done along the way.\nMaybe you already knew how to get a list of the commits that changed a specific file:\n\n\nLet me introduce its supercharged version with the -L option:\n\n\nWith that specific option you can only track a single file but above all, you can focus on a specific part, here using a function name. Git will then only print the changes to that function, commit after commit.\nEvery once in a (long) while, you will likely get a little bit more than the function block: it may go a bit beyond the end of the function. That why I favor another syntax that expects line numbers as block delimiters:\n\nIt will scan the text block within these lines and adjust that range when walking the history, so the relevant content is printed everytime. That syntax is more powerful because you’re not limited to function analysis (or to curlies-delimited functions) 🌈🦄!\nWant more tips and tricks?\nWe’ve got a whole bunch of articles, and more to come. Also check out our 🔥 killer Git training course: 360° Git!\n",
		"description": "Use \"git log -L\" to track changes only within a code block, figure out who actually introduced a bug, etc.",
		"date": 1665360000,
		"image": "/assets/images/art-vid/art-protip-git-log-l_en.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Git protip: track function changes with \"log -L\"",
		"url": "https://delicious-insights.com/en/posts/git-protip-log-l/",
		"locale": "en",
		"readingTime": "1 min"
	},	{
		"content": "How many times did you forget to add a file or change to a commit, only to discover your mistake later on (after a few commits)?\nSince I’m a strong proponent that atomic commits are key to project quality and automation (see Conventional Commit and Semantic Release, for instance), I do my best to fix these commits using Git interactive rebasing. I strongly recommend you do the same. You don’t have to be afraid of Git rebase, you just have to learn how to use it 😉.\nMy goal here is to help you do this in a quick, smooth, painless way.\nYou may already be familiar with the git commit --fixup option (or --squash) that creates a commit with a special-format message about our intent to fix another commit. Using that, when you run a properly-configured interactive rebase, Git will automatically move that commit line as a fixup next to the one you’re fixing.\nStep by step\nAdd the missing files to the stage: git add ….\nAsk Git to create the fixup commit: git commit --fixup=&lt;commit-ref-to-be-fixed&gt;.\nThen run an interactive rebase, starting one commit before the one you fixed: git rebase --autosquash -i -r &lt;fixed-commit-ref&gt;~1.\nGit opens your editor with the list of actions to run, but with the fixup already in the right spot. There’s nothing more to do except save and close the file, that’s it !\n\nYou might want to check your log to confirm everything’s fine. You may also want to check that all the intended files were recorded by the fixed commit git show --name-only &lt;new-commit-ref&gt;.\nAs this is a recurring thing for me, I made a “magic” 🧙‍♂️ alias that calls both commands in sequence (to fix a single commit):\n\nEt voilà !\n\n  The Git config you may love 💕\n  There are many useful aliases you can add to your Git configuration. You can check out the one we’ve built through years of experience to learn more.\n\nMore tips and tricks?\nWe’ve got a whole bunch of existing articles and more to come. Also check out our 🔥 killer Git training course: 360° Git!\n",
		"description": "You can quickly \"update\" an old commit thanks to \"commit --fixup\"",
		"date": 1665964800,
		"image": "/assets/images/art-vid/art-protip-git-autofixup_en.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Git protip: easily add missing changes to an old commit",
		"url": "https://delicious-insights.com/en/posts/git-protip-autofixup/",
		"locale": "en",
		"readingTime": "2 min"
	},	{
		"content": "Just as I sometimes forget files in a commit, I also sometimes botch commit messages only to find out about it a while later (after a few more commits). My most common oversight is probably forgetting to link the issue the commit refers to.\nLater, but not too late!\nDon’t tell me you’re thinking: *“Never mind, I’ll pop open the UI and fill it in by hand!” or that you’re giving up! Maybe that’s because you don’t know about interactive rebasing, or are afraid to use it 😨. Honestly, you shouldn’t be, with a little learning this command proves to be a super valuable ally!\nApart from the rebase thing, do you know that you can create a commit that expresses the intent to change the message? I bet you don’t! Fortunately, I’m here 😁 to help. Let me introduce the git commit --fixup reword:&lt;commit-ref-to-fix&gt; command. This is a bit of a mouthful but who cares, we’re going to wrap it with a nice alias that looks kind of like a magic incantation 🔮. Introducing autoreword:\n\nWhat happens if you run this command?\n🌀 An evil wormhole opens, releasing the flames of hell that reduce this world to ashes 🔥! (Actually no, we don’t even need to use magic for that 😭)\nBut seriously, Git will open your editor with a message starting with amend! … followed by the first line of the commit message you want to fix, then a line break and again the (full) commit message.\n\nYou can then change the message (but keep the very first line as-is).\n\nNow you can save and close the file.\nGit will then create the fixup commit and our alias will automatically run the interactive rebase, starting from the faulty commit’s direct ancestor. Then setting GIT_EDITOR environment variable temporary to true tells Git to not open the editor and directly runs the rebase. If you’re curious about what’s happening then, you can remove that GIT_EDITOR=true part so you can see that the fixup commit is already at the right spot in the list of actions to run, prefixed by fixup -C.\n\nThe rebase then runs (without conflict) and you can verify your log to see that the commit message was updated!\nYou may see in the actions list some keys like label onto and reset onto. This is a because the rebase is called with a --rebase-merges option that preserves your local merges while rebasing. You don’t need to worry about it 😌.\nMore tips and tricks?\nWe’ve got a whole bunch of existing articles and more to come. Also check out our 🔥 killer Git training course: 360° Git!\n",
		"description": "What about an alias that would to the job for you?",
		"date": 1665964800,
		"image": "/assets/images/art-vid/art-protip-git-autoreword_en.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Git protip: quickly rewrite an old commit message",
		"url": "https://delicious-insights.com/en/posts/git-protip-autoreword/",
		"locale": "en",
		"readingTime": "2 min"
	},	{
		"content": "\nWe quite often need to grab one or more items from the tail end of an array or string. Historically it’s been quite a pain in the neck, but things have changed!\nOld school…\nLet’s start with time-honored recipes. Although on this one, we’re likely better off forsaking tradition and embracing newer ways!\nYou may be aware that the slice() method (on Array and String) and splice() method (on Array) allow negative indices: they behave just as if they were added to length, so that −1 refers to the last item, −2 to the next-to-last one, and so forth:\n\nAlas! Not only does Array#slice() return an array (when we usually want the item itself), forcing us to blurt out an ugly items.slice(-1)[0], but negative indices wouldn’t work anyway with the indirect indexing operator [], which is our go-to approach when grabbing an item from an array (or perhaps a string):\n\nThe reason behind this is simple: the semantics of [] have nothing to do with positions. This operator accepts an expression resolving to a property name, and either the property exists under that name (we then get its value) or there’s no such name for a property (we then get undefined).\nThis is why we always end up with that train wreck:\n\nThanks, but no thanks.\nNegative indices by wrapping with a proxy?\nES2015 gave us ES proxies (check out my talk at Fronteers 2019 or the MDN docs).\nIf we can access an object wrapped with the relevant proxy, we suddenly gain the ability to perform negative indexing! As to whether you’d like doing that whenever you return an array, to allow that kind of indexing for your calling code, well, that’s up to you.\n\nLet’s see what that feels like:\n\nat() on built-in iterables\nOK, but we can’t always afford (or don’t want) to wrap our arrays with bespoke proxies. So what’s a developer to do?\nWell, ES2022 brought us the at(index) method on all built-in position-based iterables: Array, types arrays and String.\n\nfindLast() and findLastIndex()\nES2023 will introduce two new helper methods on Array and typed arrays, that will simplify searching for stuff from the tail end of an array, as opposed to going from the start.\nAfter all, we’ve had lastIndexOf() and reduceRight() since ES5, but ES2015 failed to provide “from the tail end” variants of its find() and findIndex() novelties. It was ample time we finally got around to this, as we’d been left stranded in manual-numeric-loop hell or having to first do a costly (and mutative!) reverse() ahead of search!\n\nfindLast() and findLastIndex() won’t be official until June 2023, but they’re already natively supported everywhere (Safari 15.4, Firefox 104, Chrome/Edge 97, Node 18, Deno 1.24), and for other runtimes, you can easily polyfill it with core-js (via Babel / TS or not).\nProtips galore!\nWe got tons of articles, with a lot more to come. Also check out our kick-ass training courses 🔥!\n",
		"description": "Check out the many ways — some brand-new! — to grab stuff from the end of an array in JavaScript.",
		"date": 1666137600,
		"image": "/assets/images/art-vid/art-js-protip-last-items-en.jpg",
    "_tags": ["js","tutoriel"],
		"title": "JS protip: Grabbing last items from an array",
		"url": "https://delicious-insights.com/en/posts/js-protip-last-items/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": "\nHa, the joys of formatting dates and times. Sure, we could use a single, digit-based format and call it a day. This might be enough (especially for tech formats), but it’s certainly not ideal, perhaps even way too ugly when we’re displaying these to humans, with their cultures and formatting customs: their locales.\nAfter all, when an English-language website displays 08/12/2022, how can we be sure they mean August 12 or December 8? Without knowing what locale they went with, there’s no way to be certain.\nLet’s review the options we have for doing this right without bloating our app’s JS.\n\nLegacy Date methods\nFirst there was the toString() method of Date. It produces an abbreviated US format with date, time and time zone, using your browser profile’s default time zone settings. The shape of it is quite reminiscent of Internet protocol headers, as defined most recently by RFC 5322:\n\nI intentionally left the display from a French-language browser profile here: notice how the time zone’s name uses this locale, whilst everything else hardcodes English abbreviations. #CaptainConsistency\nThis format is actually the concatenation of, among other things, the output of the toDateString() and toTimeString() methods, that have been here all along too, just like toGMTString() (that got deprecated in favor of toUTCString(), as UTC replaced GMT many years ago). ES5 also threw in the very useful toISOString(), that uses the ISO8601 standard’s format, used for instance by JSON and many other serialization standards for dates/times.\n\nYup, you read that right: toUTCString() uses a time zone name of… GMT 🤦🏻 sigh All in all, these aren’t worth much when trying to display dates and times to humans.\nSo we use third-party libraries then?\nHistorically, in order to properly format dates and times, we had to use third-party libraries.\nThe most famous one in this space, still in widespread use despite its advocating for years to use something else, is Moment.js. That’s a wonderful way to cargo-cult 300KB minified in our JS bundles for nothing (even if you setup your bundling to only include locales relevant to you, you’re looking at a minimum 70KB weight).\nFor a few years now, Luxon (Moment’s heir library) and date-fns have taken over, even if their core purpose isn’t so much formatting as manipulation (time distances, moving forward or backward in time, etc.). These two are actually entirely based on the Intl API (ECMA-402 standard), which is part of JavaScript’s standard library.\nAs a result, when it comes to formatting dates and times, the value added by these libraries is next to zero.\nIntl.DateTimeFormat\nWe’ve had the Intl API for quite a few years now (standardized by the same committee working on JavaScript and JSON), and its entire purpose is to provide as full a JavaScript access to established data formatting as possible, for all standardized locales.\nDates and times, number formatting, lists, pluralization, collation / sorting… Every locale has their own conventions, habits, customs… And all of this is actively maintained in the Common Locale Data Repository, or CLDR, which is itself part of a larger project called the International Components for Unicode (ICU).\nAs you probably expect, this adds up to a large dataset, available on all OSes through one or more system libraries. On Ubuntu for instance, the libicu66 contains about 30MB of data. This is more or less what you’ll find on other OSes, and obviously you don’t want to burden your JS bundles with that much data!\nThe Intl API provides us with direct access to most of the OS’ CLDR, which is awesome 😀\nWhen it comes to dates and times, we’ve got two classes in there, the most well-known one being Intl.DateTimeFormat. The constructor has the following signature:\n\nYou can specify one or more locales, by decreasing priority. These are BCP47 strings with various levels of detail (going from simple generic language codes, such as 'fr' for generic French, to very detailed ones such as de-DE-u-co-phonebk, which is the phonebook sorting order variant for German). I would advise you always include a country variant (e.g. fr-FR for French in France), but going beyond that is usually overkill.\nThe system will use the first locale it can fully support based on the underlying CLDR.\nBesides locales, you can specify options, and boy are there many options! We won’t cover them all (this is a protip after all, not a comprehensive tutorial on that class), and in particular will set aside options about specific date and time segments (e.g. weekday, day, month, year, hours, minutes, seconds, etc.) or altering default locale behavior (hour cycles, etc.).\nThat said, we’ll focus on very useful options for following a locale’s formatting customs:\n\ndateStyle for… the date part\ntimeStyle for… the time part\ntimeZone for, well… the time zone\n\n#CaptainObvious\ndateStyle and timeStyle\nThe dateStyle and timeStyle options allow 4 possible values: 'short', 'medium', 'long' and 'full', by increasing le",
		"description": "Find out how to easily and cleanly format dates and times in JavaScript whilst honoring locale preferences, all without resorting to third-party libraries!",
		"date": 1666742400,
		"image": "/assets/images/art-vid/art-js-protip-date-formatting.jpg",
    "_tags": ["js","tutoriel"],
		"title": "JS protip: Formatting a date/time according to locale",
		"url": "https://delicious-insights.com/en/posts/js-protip-date-formatting/",
		"locale": "en",
		"readingTime": "5 min"
	},	{
		"content": "\nYesterday we covered formatting dates and times, today we’ll learn how to format ranges of dates and times, in a clean and spiffy way!\nformatRange(), the GOAT\nToo few people noticed that DateTimeFormat was later enhanced with range management, which is super handy when formatting meetings, events, etc. where their timing has a beginning and an end.\nI myself had to implement that kind of behavior (e.g. in Ruby) and believe you me, for it to work well and look nice, it’s a pain in the neck: depending on whether both boundaries are on the same day, or month, or year, etc. means we don’t want to repeat every time component. Not to mention short formats (using en dashes in French for instance) vs. long formats (e.g. “from…to…”). This quickly spirals out of control.\nBut no longer!\n\nNotice how German has a massive format variation depending on whether boundaries are on the same day or not? Honestly that kind of detail management gives me shivers!\nWant to dive further?\nYou’re in luck: besides yesterday’s protip on date/time formatting, we’ve got a third one scheduled for tomorrow on time distances!\nWhere can I use that?\nWell, everywhere. Taking into account everything in this 3-protips series, you’re good to go with any browser released in, say, the past 3 years, plus Node 13+ and Deno 1.8+. So go for it!\nProtips galore!\nWe got tons of articles, with a lot more to come. Also check out our kick-ass training courses 🔥!\n",
		"description": "Find out how to easily and cleanly format date/time ranges in JavaScript whilst honoring locale preferences, all without resorting to third-party libraries!",
		"date": 1666828800,
		"image": "/assets/images/art-vid/art-js-protip-date-ranges.jpg",
    "_tags": ["js","tutoriel"],
		"title": "JS protip: Formatting date/time ranges",
		"url": "https://delicious-insights.com/en/posts/js-protip-date-ranges/",
		"locale": "en",
		"readingTime": "2 min"
	},	{
		"content": "\nAfter a protip on formatting dates and times two days ago and another one on formatting date/time ranges yesterday, let’s wrap that topic with today’s protip on formatting time distances!\nIntl.RelativeTimeFormat\nLesser-known than DateTimeFormat, RelativeTimeFormat has been available for a good while too, allowing us to express times relative to now, as time distances in the past or future.\nUnfortunately, it doesn’t use Date objects as “targets,”, but a quantity and a standard unit, which remains quite useful.\nThe constructor allows the same locales argument as DateTimeFormat (and the entire Intl API, really), and we can use up to two options:\n\nstyle can be 'long' (default value, showing the full unit name), 'short' (abbreviated unit) et 'narrow' (further abbreviation if the locale features it)\nnumeric can be 'always' (default value, resulting in numeric distance no matter what) or 'auto' (which I like more, using colloquial phrasing for close-enough distances).\n\nCheck it out:\n\nAmazeballs. 😎\nWant to dive further?\nYou’re in luck: we had a protip on date/time formatting, and on date/time range formatting already!\nWhere can I use that?\nWell, everywhere. Taking into account everything in this 3-protips series, you’re good to go with any browser released in, say, the past 3 years, plus Node 13+ and Deno 1.8+. So go for it!\nProtips galore!\nWe got tons of articles, with a lot more to come. Also check out our kick-ass training courses 🔥!\n",
		"description": "Find out how to easily and cleanly format date/time distances in JavaScript whilst honoring locale preferences, all without resorting to third-party libraries!",
		"date": 1666915200,
		"image": "/assets/images/art-vid/art-js-protip-relative-dates.jpg",
    "_tags": ["js","tutoriel"],
		"title": "JS protip: Formatting a time distance",
		"url": "https://delicious-insights.com/en/posts/js-protip-time-distances/",
		"locale": "en",
		"readingTime": "2 min"
	},	{
		"content": "\n\nSometimes we end up with arrays… within arrays. Perhaps a .map() went astray, or we had a recursive processing, or nested data sources: reasons are many, and oftentimes such a nested data structure is fine with us.\nBut when we want to process that as one sequence, without depth-first traversals, how should we go about it?\nUntil recently, we had to turn to third-party libraries, perhaps Lodash and its .flattenDeep() or .flatMapDepth() functions. But ever since ES2019, this has been available straight in JavaScript’s standard library!\nFlattening, more or less\nArrays offer a .flat() method, that accepts an optional depth. It defaults to 1 (one), thus flattening only one level down:\n\nNaturally, to flatten all the way, you just need to pass a sufficient depth. A “guaranteed” flattening could just pass in… +Infinity or Number.POSITIVE_INFINITY, by the way.\n\nWait a second… This is not a verb!\nIndeed. The appropriate verb would be .flatten(). Unfortunately, as is too often the case, numerous websites still rely on MootTools (our usual suspect for this). It defined .flatten() on Array already, except with different semantics (it always flattens all the way), thereby preventing TC39 from using that nicer name. (If you ever wondered why we landed on .includes() instead of .contains() on strings and arrays, now you know.)\nA note about sparse arrays\nIt’s worth mentioning that missing cells in sparse arrays are ignored by flattening, which therefore never produces a sparse array. This is a neat way of “compacting” a sparse array.\n\nTransforming on-the-fly\n\nWe often find ourselves wanting to apply a tranform to arrays before flattening them; we could do a .map() first then a .flat(), but it would be a shame to do two traversals instead of one. We also need a neat solution for the common use-case when we .map() using a callback that produces an array we want to inline in the result, keeping it flat.\nWe can optimize all this with .flatMap(). It has the exact same signature as .map() (a callback accepting up to 3 arguments and, if that callback isn’t an arrow function, an optional this to be set inside the callback).\n\nBeware! This is not akin to first calling .flat(), then calling .map(): it is the very opposite! Our mapper gets all sources items directly, untouched, including nested arrays that are passed as arrays, not as individual items. This does serve well the use-case of mappers producing arrays that need to be inlined.\nThis is also why .flatMap() has no optional maximum depth argument: as our callback gets nested arrays as arrays, it can decide whether to .flatMap() recursively or not, depending on our needs.\nArrays or iterables?\nIt would be awesome to get this across all iterables (and iterators, for that matter), but we’ll have to wait until the Iterator helpers proposal clears standardization stage 4 for this. It just made stage 3. We’ll then get a ton of cool methods on all iterators, including .flat() and .flatMap().\nFor now, stick with arrays.\nWhere can I use that?\nWell, everywhere. You’re good to go with any modern browser, plus Node 11+ and Deno 1.0+. So go for it!\nProtips galore!\nWe got tons of articles, with a lot more to come. Also check out our kick-ass training courses 🔥!\n",
		"description": "Pas besoin de bibliothèques tierces pour aplanir des tableaux en JavaScript, y compris lors d’opérations `.map()` renvoyant des tableaux !",
		"date": 1670371200,
		"image": "/assets/images/art-vid/art-js-protip-array-flat.jpg",
    "_tags": ["js","tutoriel"],
		"title": "JS protip: Flattening nested arrays",
		"url": "https://delicious-insights.com/en/posts/js-protip-array-flat-flatmap/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": "Do you suffer from the foam fingers syndrome™ or type on the keyboard with mittens 🥊 on cold winter days? Then you will love that configuration option that corrects your Git commands on the fly.\nHow does it work?\nThe initial behavior is to get a message with the suggestion in our console.\n\nWe can also ask Git to automatically apply that suggestion, without validation. This is quite interesting but it could lead to usses when the suggestion is not quite on point.\n\nGit 2.34, published in November 2021, introduced a new option to let us confirm the suggested command:\n\nWe are now confident we can avoid any misinterpretation 😌!\nMore tips and tricks?\nWe’ve got a whole bunch of existing articles and more to come. Also check out our 🔥 killer Git training course: 360° Git!\n",
		"description": "Do you make typos when writing your commands? Would you like Git to correct automatically?",
		"date": 1670803200,
		"image": "/assets/images/art-vid/art-git-protip-autocorrect_en.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Git protip: autocorrect command typos",
		"url": "https://delicious-insights.com/en/posts/git-autocorrect/",
		"locale": "en",
		"readingTime": "1 min"
	},	{
		"content": " -->\nTime for changes! We just rebranded our flagship training course and upgraded it to track state-of-the-art techs and practices, as we always do. Modern Web Apps thus becomes React PWA (which also works in French), which emphasizes its core tenets much better.\nWay more than just a rebranding\nWe took this opportunity to finally land a number of upgrades that have long been brewing:\n\nThe 5 hours or so in the first day that were about upgrading everyone to modern JS are gone, as they had been for over a year in our Node.js training. In the first half of 2022 we took the time to enrich that content and make it available as detailed articles and videos in our Idiomatic JS series. At the time of this writing, it’s not yet available in English, but we’re working on it! If you want to try your luck at the French content, here’s the article series and the video series.\nThis frees time to not only do more autonomous practice and challenges, but also explore PWA facets further.\n\nSo far we had implemented notifications, full offlining thanks to a Service Worker, installability with a Web App Manifest, and shortcuts for deeper OS integration of common in-app actions.\nWe’re adding Web Share (as a standard provider of shareable content) and explore opportunities for relevant use of more PWA-style APIs, such as Badging.\n\n\nA top-notch TypeScript variant is also available, on request, for in-house training.\nWe took a more formalized approach to “homework”, which are optional, self-paced, autonomous workshops for those of you who like to start over from scratch and refine their freshly-learnt skills every evening, outside of the training’s main app curriculum.\n\n👉🏻 Check out the full curriculum for the React PWA training 👈🏻\nVery soon we’ll also blend in more server-side API calls (we currently just do authentication), in order to highlight the various possible incremental data looading orchestration approaches, with a special emphasis on how the latest React Router does it, heavily inspired by Remix.\nAn origin story\nIn late 2011 and early 2012, we had launched short, 1-day trainings around vanilla JS and its standard library (“JS Puissant”) on the one hand, and modern front-end development on the other hand (“JS Guru”). By late 2012, it had become apparent that many clients indeed needed both sides and wanted to explore that space further: “360° JS” was born.\nBack then, a modern approach relied on Brunch, Backbone.js (hence jQuery and Undescore.js), Jade (now Pug), Stylus, Web Storage (via Lawnchair), Application Cache and a rather hands-dirty management of the DOM.\nStill, that took us pretty far: 100% client-side apps with persistence, usable and even launchable offline… over 10 years ago!\nWhat’s in a name?\nIn 2015 we rewrote that course entirely to track the state of the art of building rich, performant front-end apps. Goodbye Brunch, Backbone.js, jQuery, Underscore.js, Lawnchair, Jade, Stylus and Application Cache; hello Webpack, React, Redux, Material UI, Jest, SCSS and ServiceWorker! An ambitious rewrite.\nIn the meantime, we got more and more requests for a comprehensive course on JavaScript and its standard library. Something to look into every nook and cranny, every syntax, every built-in piece of functionality. As a matter of fact, the name “360° JS” was confusing to prospects who couldn’t be bothered to browse the curriculum, and thought that course was exactly that.\nWe knew that name was more fitting to a language-oriented course, but didn’t want to just transfer the existing name: that would have caused a great deal of confusion. We decided to name the new course “360° ES” and renamed the front-end one “Modern Web Apps”.\nYou don’t need to wait for your training with us\nWe’ve got tons of articles and videos with a lot more yet to come. Be sure not to miss any: subscribe on YouTube, follow us on Twitter or on Mastodon\n",
		"description": "Our flagship training course expands and changes name to better reflect its core tenets!",
		"date": 1671667200,
		"image": "/assets/images/art-vid/art-react-pwa.jpg",
		"title": "Modern Web Apps expands and becomes React PWA!",
		"url": "https://delicious-insights.com/en/posts/react-pwa/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": " -->\nSometimes we must force the push to partially rewrite the remote history with our local one (e.g. after a local rebase). That subject is pretty controversial. Some people say we should never change the remote history, and as always with “always or never” stances, I believe they’re wrong…\nThat strife usually comes from a partial understanding of what the push command can actually do. We can’t blame users for that: the command is poorly designed, with many option misnomers and dangerous defaults. Indeed, the easiest way is the most dangerous one: autocompletion starts with the infamous --force. 🤦‍♂️\nTo add insult to injury, most IDEs and editors offer a single approach to forced pushes, based on that very --force alone! 🤦‍♂️🤦‍♂️\nThis is a shame, considering there’s an alternate option that ensures we only override our project history if we fetched remote updates beforehand: --force-with-lease.\nNote however that, as another example of poor design, this variant alone isn’t safe enough: to also ensure that what we fetched was also applied to our local history, we must add yet another option: --force-if-includes.\nKnowing that, you should ban the single --force option as your default, unless you indeed want to erase what’s on the remote branch, regardless of third-party work there.\nBecause git push --force-with-lease --force-if-includes is hard to remember and to type, you likely want an alias. Here is mine:\n\n\n  You might also want to set --force-if-includes as the default behavior in your configuration\n  git config --global push.useForceIfIncludes true\n\nMore tips and tricks?\nWe’ve got a whole bunch of existing articles and more to come. Also check out our 🔥 killer Git training course: 360° Git!\n",
		"description": "Did you know that you could force the push without risking to erase your collegues work?",
		"date": 1674432000,
		"image": "/assets/images/art-vid/art-git-protip-push-with-lease_en.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Git protip: gently force push",
		"url": "https://delicious-insights.com/en/posts/git-push-with-lease/",
		"locale": "en",
		"readingTime": "1 min"
	},	{
		"content": " -->\nIf you’re a command line user, you probably noticed that Git does nothing on the very first push of a branch. Instead, it prints a message that suggests to explicitly type the following command: git push --set-upstream origin &lt;branch-name&gt;.\n\nIt tells Git where to push your branch:\n\nwhat remote repository (yes, you can have multiple remote repos),\nwhat branch name.\n\nIt also tells Git to setup a (default) tracking between your local branch and the remote branch you’re pushing to (then you’ll be able to use the shorter git pull and git push later on, without specifying the remote repo or branch).\nThis is quite constraining, especially since 99% of the time we only have a single remote repository and use same-name branches.\nWhat if I told you that Git can now automatically manage the remote branch creation and tracking with a simple git push?\nThis is what the push.autoSetupRemote configuration option is for (available since Git 2.38, October 2022). You can set it up globally:\n\nThen:\n\nWant more tips and tricks?\nWe’ve got a whole bunch of existing articles and more to come. Also check out our 🔥 killer Git training course: 360° Git!\n",
		"description": "No more explicit branch tracking on push! Say hello to autoSetupRemote configuration!",
		"date": 1675036800,
		"image": "/assets/images/art-vid/art-git-protip-auto-setup-remote_en.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Git protip: automatically setup remote tracking on push",
		"url": "https://delicious-insights.com/en/posts/git-push-auto-setup-remote/",
		"locale": "en",
		"readingTime": "1 min"
	},	{
		"content": " -->\nWhether you’re in the terminal or in your editor, when you have to handle many conflicting files after a merge that went wrong, it can be a pain! You have to open each file one by one, then you have to validate your resolutions by adding the files to the stage. You might as well say that, apart from the fact that you don’t enjoy it, it’s tedious and you might forget to add files to the stage if you’re not careful.\nIdeally we’d like these files to be opened one after the other by Git, and once our resolutions are done (file modified, saved and closed), they would be automatically validated / added to the stage.\nGuess what? Git does have that feature! It’s the mergetool command. Instead of opening conflicting files yourself, you run this command and Git will open them for you! 🪄\n\nBefore you run off headlong into your terminal with this command, take the time to read on, as there are a few tricks and subtleties.\nFor starters, you need to define the default tool that you want to use. There is already a list of tools available on your computer and a list of suggested tools to install. To see all this, invoke the command with the --tool-help option:\n\nBy default, Git will look at the tools already installed and open the first one it finds. These tools are usually good enough, but you need to know how to use them (or exit them when it comes to vim 😅: ZZ or :q!). If you occasionally want to use one of the listed tools, then you can invoke the command with the --tool option:\n\nYou’ll probably want to define it once and for all as your default tool. This is what Git configuration is for:\n\nNote that if you want to use an editor that is not in the list, you have to set it up as a custom Git mergetool (what you see above in the user-defined list). This is done with the following command:\n\nThat’s it! Next time you have a conflict, you don’t have to go and open your files manually, you just have to run git mergetool.\n\nOnly after a git merge?\nNope! It works with any conflict situation! It can follow a merge, a rebase, when applying a stash or a switch --merge.\n\nWant more tips and tricks?\nWe’ve got a whole bunch of existing articles and more to come. Also check out our 🔥 killer Git training course: 360° Git!\n",
		"description": "Resolving conflicting files after a merge can be time consuming. Fortunately Git provides us with a tool to speed this up!",
		"date": 1675641600,
		"image": "/assets/images/art-vid/art-git-protip-mergetool_en.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Git protip: speed up conflicts management with mergetool",
		"url": "https://delicious-insights.com/en/posts/git-mergetool/",
		"locale": "en",
		"readingTime": "2 min"
	},	{
		"content": " -->\nThis is 2023, promises and async / await are everywhere. And yet, we still use good ol’ callback-based setTimeout() and setInterval()… Ah well, these are old-timers (pun intended), so I guess we’re out of luck.\nAre we though?\nA long-awaited ugprade\nLet’s start with the backend side of things. Node.js has always offered the browser’s setTimeout() and setInterval() APIs: without them (or console) adoption would likely have been non-existent! 😅\nThese old APIs are, as you’d expect, callback-based; but Node 15 introduced a promise-based variant:\n\nSmokin’! 🤩\nWhen I use it like that, I tend to rename it:\n\nUnlike the sleep of old blocking runtimes, this doesn’t block the thread: these are promises, after all! await suspends, it doesn’t block.\nNot just setTimeout(), either…\nAnything from timers is covered, including setInterval() and setImmediate().\nAs for intervals, you probably wonder how things go, as it’s a recurring thing? Well, it returns an async iterable (that so happens to always produce the same value, which you may provide using the 2nd argument), something you can consume for instance with a for…await loop:\n\nIn practice, the clearInterval() is achieved by simply exiting the loop (here with a break), but what if you want to cancel the timer from another code location? Well, it uses the same decoupled cancellation mechanism you get for anything promise-based: AbortController and AbortSignal.\n\nYou can find the same kind of variant for event listeners(e.g. await once(stream, 'close') with node:events), filesystem access (e.g. await readdir(path) with node:fs/promises), or even readable streams (both classic Node streams or the ReadableStream Web API), the latter being async iterables aware of signals.\nIn short, await is the way! 😎\nWhere can I use that?\nAs for timers, since Node 15. Filesystem, readable streams and events had this since Node 10 (although importing from node:fs/promises only stabilized with Node 14, and signal awareness came with Node 15).\nWhat about browsers?\nOnly “recent” APIs are promise-based, but you can easily wrap setTimeout for promises:\n\nIf you really want to go all the way and handle signals, you can do that too (they’ve been supported on modern browsers for many years):\n\n(As for setInterval, it’s not much harder, but I’ll leave it, as they say, as an exercise to the reader 😉).\nProtips galore!\nWe got tons of articles, with a lot more to come. Also check out our kick-ass training courses 🔥!\n",
		"description": "Callback-based `setTimeout()` is dead, long live promise-based `await` delays!",
		"date": 1675814400,
		"image": "/assets/images/art-vid/art-js-protip-timers-promises.jpg",
    "_tags": ["js","tutoriel"],
		"title": "JS protip: Delaying with setTimeout(), but using await!",
		"url": "https://delicious-insights.com/en/posts/js-protip-timers-promises/",
		"locale": "en",
		"readingTime": "2 min"
	},	{
		"content": " -->\nThese days I keep stumbling onto people who seem to copy/paste this code snippet to produce an array of N copies of a unique value:\n\nThis happens to be needlessly verbose, but that doesn’t mean from() isn’t super useful, far from it! So how do we pick the right tool for the job, then?\nAlways the same value: short and sweet with only fill()\nJavaScript has always allowed initializing an array with a given length using Array(n). This is super concise, but no cell is actually defined, it’s kind of the ultimate sparse array: there are no cells!\nThis means operations like map() and friends won’t do anything, or will return their default, “empty dataset” value:\n\nThe established way of “filling in” such an array uses fill() (duh), like so:\n\nThis is spiffy when the value is always the same (e.g. false, 0 or the empty string ''), which does happen frequently in the real world, usually at the beginning of some processing.\nBut what if we want multiple values?\nMultiple values: let’s be smart about from()\nAt its core, Array.from() is about turning an iterable into an actual Array. It would usually consume a Map, a Set, codepoints from a String or perhaps a DOM NodeList, to list the most common scenarios.\nNow, when the argument isn’t iterable, it needs to be at least “array-like,” which means it should have a non-negative integer length property (and expectedly properties for indices from 0 to that), making it interoperable with most usual Array methods.\nFun fact: Array.from({ length: 5 }) is not quite the same thing as Array(5): the latter doesn’t define any cell, whilst the former set all cells at… undefined:\n\nThis is cool and all, but the real banger is its second, optional argument: the mapper, the function that will transform items on-the-fly.\nWe could use that to generate any random value, or perhaps base them on the position, since that mapper receives not only the value but the index, much like array iteration methods (e.g. map() and filter()).\n\nLet’s go all-out and implement a range() utility, kind of like Lodash’s (the end boundary is exclusive):\n\nSweeeet 😎\nTL;DR\n\nfrom() can transform the source to generate values on-the-fly\nfill() is more handy when using a unique value, and is shorter with Array(size)\nBoth appeared in ES2015 (so they’re everywhere except IE, which is irrelevant now anyway)\n\nReferences\nThe MDN’s interactive docs are as delightful as ever:\n\nArray.from()\nArray#fill()\nArray()\n\nProtips galore!\nWe got tons of articles and videos, with a lot more to come. Also check out our kick-ass training courses 🔥!\n",
		"description": "Array.fill() or Array#from()?!  It all depends on what you need, so here’s your guide to picking the right one every time.",
		"date": 1679443200,
		"image": "/assets/images/art-vid/art-js-protip-array-from-fill.jpg",
    "_tags": ["js","tutoriel"],
		"title": "JS protip: Array.from() vs. Array#fill()",
		"url": "https://delicious-insights.com/en/posts/js-protip-array-from-fill/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": " -->\n\nThat article is a complement to our existing article from 2013, the content of which is still relevant and describes some parts of our configuration.\nThe current article is more about how to setup the configuration, its benefits and drawbacks.\n\nKnowing how to configure Git is great! You can customize behaviors, create custom aliases, setup your favorite tools to work with Git, and customize even the colors of Git commands in the console. But configuring Git is also a constraint:\n\nGit default behavior is not optimized,\nyou must create configuration files,\nyou have to replicate that configuration on all computers that need to work with Git,\nand you’re seldom aware of the things that can or must be configured.\n\nBasically, it could be better…\nUnfortunately, Git is not at its best right out of the box! Some behaviors have to be optimized like the rebase mode for the pull command (rebase = merges, to be precise). You shouldn’t even be worrying about this and yet here you are, having to specify all that in the Git configuration.\nThere is a reason for that: new Git releases should not change its existing behavior. You shouldn’t see any difference in Git behavior following an update. This approach is rather questionable, but unless you take over the management of Git releases (which might be feasible, as it’s an open-source project), it’s not likely to change anytime soon.\nSo how can you fix that? It’s quite simple: you can attend our 360° Git training 😁! If you’re not that lucky, then you’ll have to read the docs follow our recommendations. Being the nice folks we are 🤗, we commented each configuration line, so that you have at least some idea of what it’s all about.\nSyntactically, it looks like an .ini or TOML file. You have a block with a header defined with square brackets, followed by lines in the key = value format.\n\nSometimes you will need to create named blocks to tweak features behaviors for specific commands (sometimes called drivers). Here is an example that customizes status command colors:\n\nWhere do I put all this?\nThere are several options, but the simplest and most effective one is to set the configuration globally for your user account. This will be used for all your projects on your computer. You’ll usually find a .gitconfig file in your home directory (if not, you can create one). If you’re not sure, you can access it from a commandl ine by typing git config --global --edit (be careful, it may open it in vim by default). Some graphical editors also let you edit this configuration, often within their own user interface.\nIf you are on Windows, be careful if user accounts are stored on a server: if the network goes down, your Git configuration will not be loaded anymore (it is read at each Git command execution). You won’t necessarily notice this because Git won’t report any errors. One solution would then be to change its location by changing the XDG_CONFIG_HOME environment variable.\nYou can also specify the “local” configuration, as close to the project as possible. This is the .git/config file of your project (yes, it’s not the same name 🤦). It’s useful for clarifying or overwriting the main configuration. For example, when I work on open-source projects, I use my personal email address rather than the professional one (see example farther below). That local configuration also contains the definitions of remote repositories, branches, etc. which are obviously distinct from one project to the next.\n\nAlso available at system-level\nWe don’t recommend configuring Git at the system level, but Git will also try to load the configuration from there (mingw32etc\\gitconfig or mingw64etc\\gitconfig on Windows, etc/gitconfig on Linux and MacOS).\n\nHow do I set up my configuration?\nAside from copying and pasting our configuration template, you may want to change some things. You can either edit the file by hand, or do it via the command line.\nDon’t be afraid of editing a configuration file by hand since Git’s parser will tell you where you have an error.\nIf you prefer a safer way, you can use the command line following the pattern:\n\nIf you remove the --global flag, it will change the local configuration.\n\nHow does Git load my configuration?\nThe configuration files are read at each Git command. First at the global level, then at the local / project level. The global configuration is then augmented or overwritten by the local definitions.\nFinally, depending on what you want to do, you can usually override some options directly from the command line.\n\nIn summary: the closest configuration to the command wins!\nHow about sharing?\nIf you use Git already, you may be familiar with the .gitignore file that you can put at your project’s root to share it. You might naively think that you can do the same with the configuration and create a .gitconfig file in the same location… Except you can’t! Do you really think it would have been that easy?\nBasically, configuration is not shared. You could use a few tricks, for e",
		"description": "Using Git optimally goes through tweaking its configuration!",
		"date": 1679875200,
		"image": "/assets/images/art-vid/art-git-config-part-2.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Configuring Git",
		"url": "https://delicious-insights.com/en/posts/git-config-part-2/",
		"locale": "en",
		"readingTime": "4 min"
	},	{
		"content": " -->\nWhen working with Git, you rarely want to version all the files in a project. Whether we’re talking about files related to the operating system, technical files related to an editor/IDE, sensitive files (security keys, configuration, etc.) or simply locally generated files (logs, temporary files, etc.), you want to avoid sharing them.\nGit has a mechanism to ignore them. An ignored file will not be added to the project and will not appear among the files listed by Git (see the git status command).\n\nSeveral ways to ignore files\nThe best way to do this is to create a .gitignore file at the root of the project.\nThis file must be versioned / added to the project so that its rules apply to all project contributors.\nThere are alternatives, but at Delicious Insights we think they are bad ideas:\n\nseveral .gitignore files in the project (one per directory, for example);\none global file for your user account, via configuration, and thus not shared through the repo;\nthe .git/info/exclude file, not shared either.\n\nLet’s dive a bit deeper into why we dislike these approaches.\n1. Multiple .gitignore files\nWhy create multiple files when you can have only one at the project root, which is easier to maintain?\n2. A global / user account file\nIt’s tempting to define global rules for our user account, applicable to all our projects (in addition to the local rules for each project), except that…\n\nthese rules will not be shared with coworkers;\nour coworkers might commit files that we would have ignored on our side;\n\nIf you really want to try this, you can look at the following command:\n\nYou’ve been warned 😉.\n3. The .git/info/exclude local file\nSame point as before: not shared with coworkers, so what’s the point?\nSyntax\nWe can use a time-honored, solid syntax: globs. You can specify specific file or directory paths, as well as patterns. You can even specify negations (what you don’t want to ignore), which is useful when you want to make exceptions to a more general pattern.\n\nMy files were already versioned\nYou may one day face the odd situation where you want to ignore files that are already versioned. Adding the rule to the .gitignore is not enough, your changes still appear, available to add 😨. You need to “remove” the file from version control. Of course, Git has a command for that:\n\nGit records the deletion of the file, but leaves it in the working directory (that’s the whole point of the --cached option). So you keep your file, but now it’s considered excluded (if you added the rule in the .gitignore).\nBe aware that the entire history of the file remains in Git. You just stop versioning it from now on. If you want to purge it entirely from the repo’s history, my advice is:\n\nask yourself whether it is really useful?\nIf so, then use a tool to rebuild the entire project history: filter-repo.\n\n\nMy advice\nIf you want to purge a critical file that should never have been versioned (like a private key, for example), keep in mind that it may have already been read by someone else.\nSo it won’t be enough to just purge it from the history, make sure you regenerate the keys and ignore the related files from now on (again with .gitignore).\n\nI want to add an ignored file\nYou have two options:\n\nEither you change the ignored patterns in the .gitignore file, possibly using a negation (as seen in the example syntax);\nOr you force the addition with a git add --force.\n\nAs I explained before, if a file is versioned, the .gitignore no longer impacts it. So by forcing the add once, you don’t have to worry about missing out on later changes.\n.gitignore templates\nYou can imagine that we generally use the same project bases, editors, operating systems. It follows that you will find the same patterns to ignore from one project to another.\nThere are ways to easily compile relevant ignore pattern lists. We explain it all in this complementary protip.\nWant more tips and tricks?\nWe’ve got a whole bunch of existing articles and more to come. Also check out our 🔥 killer Git training course: 360° Git!\n",
		"description": "How can we avoid adding unwanted files to Git?  And reciprocally, how can we add otherwise ignored files?",
		"date": 1681084800,
		"image": "/assets/images/art-vid/art-git-ignore_en.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Ignoring files with Git",
		"url": "https://delicious-insights.com/en/posts/git-ignore/",
		"locale": "en",
		"readingTime": "3 min"
	},	{
		"content": " -->\nYou may have read our tutorial which explains how to ignore (or not) files in Git. So, you started writing your .gitignore file at the root of your project, and you probably realized that there were plenty of things to add to it:\n\nfiles related to the operating system (.DS_Store files on Mac, Thumbs.db on Windows, .swp on linux…) ;\ntechnical files related to an IDE or the project context (.settings/ for Eclipse, .idea_modules/ for WebStorm/PhpStorm…).\n\nHow about we leverage existing templates?\nYou may have already seen, or even used, the templates provided by some platforms such as GitLab (at project initialization).\n\nBut it gets even better! Let me introduce you to the ultimate .gitignore generation tool: gitignore.io!\n\nIt doesn’t get any easier: just fill in the tools, languages, OS… people work with on your project, then click “Create”, and you get a text content that you can copy-paste into the .gitignore file for your project.\nIsn’t life great?\nWant more tips and tricks?\nWe’ve got a whole bunch of existing articles and more to come. Also check out our 🔥 killer Git training course: 360° Git!\n",
		"description": "The .gitignore file is great! But it can be tedious to fill in. What if we could do it all in one go?",
		"date": 1681084800,
		"image": "/assets/images/art-vid/art-git-protip-gitignore-io_en.jpg",
    "_tags": ["git","tutoriel"],
		"title": "Git protip: easy templates for `.gitignore`",
		"url": "https://delicious-insights.com/en/posts/git-protip-gitignore-io/",
		"locale": "en",
		"readingTime": "1 min"
	}]
