Planet HFOSS

May 01, 2019

GreenGazebo

Final Project

Astroangles is a game about learning to ‘guess’ angles based on sight alone, which is very useful when approximating various calculations.

Presentation: https://github.com/jibby0/IGME582-final/blob/master/Astroangles.pdf

May 01, 2019 12:00 AM

April 22, 2019

effendian

HFOSS - Quiz 2

Quiz 2

HFOSS is not a lawyer, and this does not

April 22, 2019 01:30 PM

jibby

quiz2

Please select the most appropriate span of time for each of the following [4pts in total]:

1) The Bayh-Dole Act was passed, which gave rise to university technology transfer offices; the GNU Project and Free Software Foundation were founded; the US acceded to the international Berne Convention on copyrights; software was explicitly covered by US federal copyright law [ 1 pt]
  • c) 1980s
2) Creative Commons was founded. [1 pt]
  • e) 2000s
3) UNIX was published, Michael Hart started digitizing texts for what became Project Gutenberg, and Bill Gates wrote his “Open Letter to Hobbyists”. [1 pt]
  • b) 1970s
4) Linus Torvalds started writing the Linux kernel, and the Digital Millennium Copyright Act was passed. [1 pt]
  • d) 1990s
5) Several elements are combined in different ways to form the various Creative Commons licenses.
  • 5.1) NC _
  • 5.2) SA _
  • 5.3) ND _
  • 5.4) BY _
  • A) You must convey the same rights “downstream” that were conveyed to you by “upstream”.
  • B) You must attribute the contributions of the original or upstream creators of the work.
  • C) You may not use the work for commercial purposes.
  • D) You may not make changes to the work.
  • 5.1: C 5.2: A 5.3: D 5.4: B
6) The presence of which license elements make a license “non-free” in the eyes of the FSF? (give the letters, 1 pt each)
  • ND, NC
7) Which license element is a copyleft? (give the letter, 1 pt)
  • SA
8) Name two projects which distribute a body of non-software, free culture data, and briefly name or describe the kind of data. (1 pt XC each)
  • 8.1) Wikimedia: distributes educational content (everything from Wikipedia to Wikibooks).
  • 8.2) Freesound: Creative Commons licensed music and audio.
9) We discussed several concepts involving rights, restrictions, and licensing. Match the capital letter of the term on the left with the lower-case letter of the most appropriate description on the right. (2 pts each)
9.1) trademark
9.2) copyright
9.3) patent
9.4) trademark
9.5) copyright
9.6) patent

a) 20 year term
b) lasts as long as used & defended
c) life of the author plus 70 years
d) arises as soon as a work takes tangible form
e) precedence is given to the first to file an application
f) protects consumers from confusing one product with another

9.1: B 9.2: C 9.3: A

9.4: F 9.5: D 9.6: E

by Josh Bicking at April 22, 2019 01:27 PM

pyrophone

Quiz 2

  1. c
  2. e
  3. b
  4. d
  5. Licenses
    1. c
    2. a
    3. d
    4. b
  6. ND and NC make a work non-free
  7. SA makes a work copyleft
  8. Projects
    1. Project Gutenberg - Free and open source e-books
    2. opengameart.org - Free and open source art assets, such as sprites, 3D models, and textures
  9. Rights, Restrictions, and Licensing
    1. f
    2. d
    3. a
    4. b
    5. c
    6. e

April 22, 2019 12:00 AM

April 16, 2019

pyrophone

Blog 12

This week, I pushed a small bug-fix to the Godot game engine again, and did some digging into the source code of a few Sugar activities.

Since my last bug fix, I wanted to contribute more to the community. After a few weeks of being busy, I finally got a bit of time, so I started searching for some small bugs to fix. Eventually, I found a small issue with how one of the right-click menus is displayed. After some searching in the code, I found the area creating the menu and the area where the bug itself was. As it turns out, it was actually a really simple fix, requiring me to move one line of code to another area. After the fix, I created a PR and, after some short communication and testing, it was merged into master. The link to the PR itself can be found here.

On top of this, I’ve been diving more into the activities of Sugar to figure some things out. In particular, I’m looking at the Countries activity to see how the typing system itself is done and how we can extract that into our own game. I’m only a bit familiar with Python, but from what I’ve been playing around with, it wouldn’t be too hard to adapt their code to our game. I’m starting to play around with turning the activity into a basic typing thing, and then changing it to restrict its input to numbers and a few symbols, as sketched below.
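
The character filtering itself can be prototyped in plain Python before wiring it into the activity. Here is a minimal sketch (the allowed-character set and function name are hypothetical, not taken from the Countries code):

ALLOWED = set("0123456789/.-")   # digits plus a few symbols useful for fractions

def filter_input(text):
    """Keep only the characters the game should accept."""
    return "".join(ch for ch in text if ch in ALLOWED)

print(filter_input("3a/4b"))   # prints "3/4"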

Also, here is the format-patch for adding this blog to the yaml file.

April 16, 2019 12:00 AM

April 15, 2019

jibby

blog12/meetup3: Spacemacs

This past week at RITLug, I gave a quick overview of my favorite editor and customizations. Of course, I can’t imagine a text editor that doesn’t include a Tetris clone, a psychotherapist, or a Tower of Hanoi player.

I’m talking about Emacs of course: undoubtedly, the most feature-rich text editor in existence. But, also, one of the least user-friendly editors.


Emacs is quite the rabbit hole. So many packages, extensions, and customizations have been built for it over the years. While VS Code and Atom advertise extensibility, Emacs is essentially a Lisp machine with a screen: all of its internals are exposed, meaning customization is all but limitless.

How much freedom is too much?

That endless customization comes at a cost, however. There aren’t really rules when writing Emacs code: in its Lisp dialect (Elisp), all variables and functions are global, so anything can modify anything at any time. Often, when I attempted to customize my Emacs, there would be an odd variable or list element that I could never pin down (or, even worse, one that was dynamically created or modified through macros), and I would eventually just give up.

Freedom to customize is important, but when that customization isn’t formal, it can lead to impracticality.

Maintaining my own Emacs config

I tried keeping my own dotfiles for Emacs for a couple years.

I came from Vim, so I wanted Vim bindings, and I heard Emacs did that well. That was my first concern, and my first pain point.

Plenty of Emacs documentation still discusses Viper, a very old package that’s been blown out of the water by Evil. Both of these provide vi-style bindings, but Evil does it much better, and is actively maintained.

Old docs and snippets of Elisp will live forever, because Emacs will live forever, and will never break Elisp backwards compatibility, not even to make lexical scoping the default or add threading.

Ugh.

So, once you’ve discovered Evil is better than Viper, then comes the question of configuring it. Evil provides its own functions for adding keybinds to different evil states. States are what Evil calls its vi modes (normal, visual, insert, etc.) because Emacs already has a concept of major and minor modes running in a buffer.

The main function for this is evil-define-key. It asks for a map. What’s a map? A series of keybinds a mode uses. Or you can use 'global for all maps, according to the documentation. What does that tick mean? Ah, of course, it quotes a symbol: you’re passing the name of the symbol (i.e., a variable), rather than the value bound to it.

Yikes. If this sounds like a lot, that’s because it is.

Coming from Vim, where batteries were included with every package, minimal configuration was needed, and documentation on a plugin was (generally) a page, this was a big paradigm shift. And I’m not alone in getting overwhelmed.

People go crazy with their Emacs configs. Hell, people build literate dotfiles in Org mode, then use its insane feature set to generate the actual config Emacs reads from that.

Many people create such complicated configs, they eventually throw them all out and start over. This practice is common enough to have a name: declaring Emacs bankruptcy.

Why recommend Emacs then?

Despite the learning curve and the insane amount of time it takes to configure, Emacs is still an incredibly powerful ecosystem, and it can integrate with just about any software system on the planet.

But, there are people way better at configuring it than I am. So good, in fact, they made configurable configurations.

A couple of these exist. I use Spacemacs, but I’ve also heard good things about Doom Emacs.

There comes a point where I want to do work, rather than mess with my editor. It’s fun, I learn a lot, and I can share what I create, but sometimes I just need autocomplete to work, or schoolwork just needs to get done.

Instead of working directly with packages and their configurations, Spacemacs abstracts these through layers. Layers are collections of packages with a standard configuration (such as Evil bindings, sane defaults, hooks in the right place, etc.). Adding layers is simple: just add an entry to your .spacemacs file. Add rust for the Rust layer, php for the PHP layer, and auto-complete for autocompletion in both of them.

I swapped over to this several months ago: while I’ve had to adapt, it’s nice knowing I’m not the only maintainer of my config, and that I don’t have to be the only bugfixer either!

The presentation

So, in short, what did I change to show off at RITLug, and what could I do?

All I did was add layers the Spacemacs community had already built, and show them off. So, not only was this a powerful way to configure, it could also be reproduced in a matter of minutes, without learning any Elisp.

Highlights were:

  • Magit (via the git layer), for visual staging and unstaging of work
  • GDB visualization: Spacemacs sets gdb-many-windows, making GDB debugging beautiful. It also manages window sizes when adding and removing windows, which helps a lot when GDB pulls up its 6 windows.
  • LaTeX rendering of math, in editor (via the latex layer)
  • Remote editing (and building, and so forth) with TRAMP

So, why Spacemacs?

Spacemacs isn’t perfect: you don’t get fine-grained tuning, like you do with vanilla Emacs. But, quite frankly, I don’t need it. I would rather accept some unfamiliar configurations for ease of use, and still hold all the power of Emacs and its plugins.

by Josh Bicking at April 15, 2019 06:59 PM

April 11, 2019

jibby

blog11: Python tooling

Oh dear.

I love Python. It’s wonderful for prototyping, and it simplifies much of high-level programming. There’s a package for everything, from data science to graphics to Chromecast communication. But I’d be lying if I said it was perfect: everything from Python 2.7 hell, to the GIL, to poor performance. And, of course, tooling.

Tooling and package management is a significant pain point of the language. It has evolved from root pip installs, to the rise of virtualenvs, to virtualenv management.

The terms get confusing, and new tools/methodologies are popping up all the time.

Pip

Pip is Python’s package manager. It pulls packages from PyPI, the Python Package Index (not to be confused with PyPy, a JIT compiler for Python).

Pip has some difficult jobs.

Packages sometimes require lower-level build tools to function properly. PyInstaller, for example, builds native executables out of Python code, meaning clients can use your Python program without installing an interpreter.

System packages (such as something installed with apt or dnf) conflict with root installs from Pip. As more of the world uses Python, this turns into a situation of two package managers on the same system: each packages different dependencies, so how do they coexist? While Pip is limited only to the Python world, there’s plenty of opportunity for conflicts.

Virtualenvs

A virtualenv (a virtual environment) is essentially a folder in which a Python environment is isolated. Python packages and libraries are installed there. When Python is executed, it looks in this contained environment before searching elsewhere.

Creating or using a virtualenv usually just involves running a script to build, activate, or deactivate it.
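
For example, the venv module in Python’s standard library can build one programmatically (a minimal sketch; the env directory name is arbitrary):

# Build an isolated environment in ./env, with pip bootstrapped into it
import venv

venv.create("env", with_pip=True)

Activating it is then just a matter of sourcing the generated script: source env/bin/activate on Linux or macOS, or env\Scripts\activate on Windows.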

Pipenv

Cool, so we can create little sandboxes for Python packages to live in. What about projects with requirements?

Pip uses a Requirements File, which is essentially a list of pip install commands. Versioning is done by running pip freeze > requirements.txt. Not necessarily the cleanest, but it gets the job done.

Pipenv bridges package management with virtual environments, accessing both of them with one file, and automatically keeping requirements up to date. Virtual environments are created and updated automatically. It uses a Pipfile format, rather than requirements.txt, and provides means to convert an existing requirements.txt to a Pipfile. It tracks interdependencies (something pip doesn’t do on its own) in a Pipfile.lock file, specifying particular versions for these interdependencies, and ensuring new installs don’t pull unexpected versions.

This “lock file” strategy is similar to the one NPM uses.

Pipenv also allows for different sets of dependencies within the same project: if an extra series of dependencies is used in a “dev” environment, pipenv install --dev will install those dependencies as well.

In short, I’ve started using this for most of my projects, and encouraged collaborators to do so as well. It’s simple to install and get running with, even if you aren’t familiar with virtualenvs, or most of Python’s packaging woes.

Anaconda

In the world of science though, there’s a different story.

Many linguists and data scientists are familiar with Anaconda, a bundle of Python tooling and resources. Among the included tools are Jupyter, Spyder, IPython, etc. It also provides Conda, which does several of the tasks Pipenv does. It attempts to make venv creation and package installation easy, either through command line tools or a graphical interface.

I haven’t used it too much personally, but I interacted with it during an NLP class. The kicker is Python version management: Conda makes it easy to install and manage various Python versions. This is especially helpful for running old 2.7 code, or other code that requires a specific Python version.

Pipenv doesn’t do this: another tool bridges this gap (Pyenv), but its installation and setup is much less intuitive, requiring the addition of environment variables to your shell. Conda acts the same way venvs do: run an activate script, and you’re in.

anaconda-project is how most people interact with it on the command line. It’s fairly straightforward to create a project and add dependencies, either by editing its anaconda-project.yml file directly or by running anaconda-project add-packages package1 package2.

It has the nice bonus of downloading files upon initialization: great in NLP, when your project requires a dataset, but you don’t want to distribute that giant file with your code.

Using this requires all of Anaconda, though. That means, be prepared to lose 3GB of space to Python tooling. Yikes.

Looking forward

Packaging is still a hot topic: Pipenv was named “the official packaging tool” by the Python Packaging Authority, a decision met with several cries of outrage.

Packaging is also a problem that’s difficult to fix retroactively: kudos to Go and Rust for thinking about it, alongside language development.

I don’t think Python will go away just because of packaging woes, though. So it looks like we’ll keep trying to find ways to deal with them.

by Josh Bicking at April 11, 2019 02:03 AM

April 09, 2019

effendian

pyrophone

Blog 11

This week, for a class game jam, I explored the Xenko (https://xenko.com/) game engine. Xenko, formerly known as Paradox, is an open-source game engine licensed under the MIT license and originally made by the game development company Silicon Studios. Xenko has an interesting licensing history. Before version 3.0, Xenko actually had two licenses. The engine itself was licensed under the GNU GPLv3, while the editor was proprietary. This dual-license arrangement was mainly meant to be convenient for different use cases, like places where the GPL license would not work. This was ultimately one of the important reasons for re-licensing the engine under the more permissive MIT license.

Xenko is available for Windows, but can also deploy to macOS, Linux, Android, iOS, and Xbox One. This limits who can use the engine, especially since the main operating system of open-source software users is not available for development, and I think it’s probably one of the reasons Xenko is not as popular as engines like Godot.

Using Xenko was an interesting experience. At first glance, the engine functions very similarly to Unity, featuring a similar layout with similar scripting capabilities. Like Unity, Xenko also uses C# scripting, meaning that the code is very similar to Unity’s as well. Overall, it wasn’t difficult to start developing for the engine, especially since they had a “Xenko for Unity Developers” tutorial.

Unfortunately, as we used the engine more, we discovered more problems. First, most of my team seemed to have trouble installing Xenko on our home PCs. We repeatedly had to install and uninstall different packages until it eventually worked. On top of this, sometimes the smallest changes would cause the engine to break and not compile. We tried to change the resolution of our game, and it ended up causing the exe to crash every time we ran it. When we reverted those settings, the exe still crashed when we ran it. On top of all the errors and crashes we experienced, their documentation was not the best, and there was almost no community for the engine.

I think that if Xenko had a stronger community contributing to it to add features and help make it more stable, it could be a great engine to use, and a powerful competitor to Unity. Unfortunately, its instability and small community just make it too much of a hassle to deal with, so if you’re looking for a FOSS engine, I recommend just sticking with Godot.

April 09, 2019 12:00 AM

April 08, 2019

effendian

GreenGazebo

Quiz 2

1) c
2) e
3) b
4) d
5.1) c 5.2) a 5.3) d 5.4) b
6) NC, ND
7) SA
8.1) The soundtrack of the game Castle Crashers.
8.2) The card game Cards Against Humanity.
9.1) b 9.2) c 9.3) a 9.4) f 9.5) d 9.6) e

April 08, 2019 12:00 AM

Final Project Proposal

Group members:

Josh Bicking - jhb2345@rit.edu
Giovanni Aleman - ga9494@rit.edu
Derek Erway - dje4179@rit.edu

For the app we’re going to create for the Sugar environment, we decided on a sort of space-themed typing game. By this I mean it will be similar to existing games where you need to type letters or words within a certain time limit. In this case though, we will focus on the equivalent fractions section of the curriculum. The player will have to either choose between different given fractions or possibly enter a certain number of equivalent fractions in order to go to the next level. Currently we don’t necessarily have defined roles, as we all sort of plan to help out in each area of the project.

Repo for the project: https://github.com/jibby0/IGME582-final

April 08, 2019 12:00 AM

April 06, 2019

effendian

April 05, 2019

jibby

finalproposal: Frasteroid

For our final project, we’ve decided to build a fraction game in the “learn to type” style.

I remember playing typing games in 3rd-4th grade. Words (or some combination of letters) would appear on incoming objects, and you’d have to type that combination to destroy them before they hit you, crossed a line, etc.

The plan is a game in the same style, but the asteroids flying at your ship have fraction problems on them. You may need to convert them to mixed numbers, or maybe mixed numbers back to fractions. We could incorporate other fractional aspects too, such as addition.

Development will be done on GitHub: https://github.com/jibby0/IGME582-final

Our team is the same as it was for Commarch:

  • Josh (me)
    • IRC: jibby
    • email: jhb2345<at>rit<dot>edu
  • Derek
    • IRC: GreenGazebo
    • email: dje4179<at>rit<dot>edu
  • Geo
    • IRC: pyrophone
    • email: ga9494<at>rit<dot>edu

by Josh Bicking at April 05, 2019 02:50 PM

effendian

pyrophone

Final Proposal

For our final, we would like to make an asteroids/typing game. The game will mainly focus on teaching kids fractions and fraction equivalencies. The gameplay itself will involve asteroids with different fractions on them that require you to “fill in the blank” and type in an equivalency for them. By matching an equivalent fraction, the asteroid is destroyed. If an asteroid hits the planet, it will take damage. After enough damage, the planet is destroyed.

Our team consists of:

  • Giovanni Aleman
    • Email: ga9494@rit.edu
  • Josh Bicking
    • Email: jhb2345@rit.edu
  • Derek Erway
    • Email: dje4179@rit.edu

April 05, 2019 12:00 AM

April 03, 2019

effendian

TangyLime

The Rust Programming Language by Ben Goldberg

I went to a talk on Rust the other week, hosted by RIT's own Linux User Group, or RITlug. The talk was given by Ben Goldberg, a fellow GCCIS student at RIT, and was an intro to programming in Rust and its possible uses.

The Content

Ben's presentation was split into two parts: a slide deck portion as well as some coding examples. Ben describes Rust as a language that provides both performance and control, and raved about the language's safety and compiler features. He went over syntax, typing, scope, and other topics as an introduction to the language and what it's like to use it. A particularly interesting part of Rust is that variables are immutable by default; the mut keyword is used to make a variable mutable.

The Talk

I really liked the talk and its structure. The atmosphere in the room felt pretty informal for a presentation: audience members would shout out questions to Ben or provide more context to some of his talking points. The dynamic nature of the presentation was really interesting, and I hope that a lot of the presentations given at RITlug are this interactive. Everyone in the room seemed to be working together to paint a more detailed picture of the language, and it was a lot different when compared to other talks I've sat through.

The talk was great and informative, and I want to learn a lot more about Rust when I get some time to sit down and read through the Rust Book and documentation, both of which were highly recommended by Ben for those who want to jump in.

by Joshua Schenk at April 03, 2019 04:56 AM

nic-hartley

Set up a Pi with no router, no cable, and no peripherals

If you're at uni and you've tried to set up a Raspberry Pi, you've probably hit a problem. Ditto if you don't own your router and your ISP (or landlord) won't give you access to the control page. The issue is that you somehow need to get your RPi's IP address, but you don't have any way to get it!

So instead of using your normal router, well... why not make your own? Sure, you could go out and buy one, but chances are you already have a device capable of generating a hotspot. Your laptop can probably do it, as can your phone. Why not use them?

Note: if you're already experienced with setting up RPis headlessly, you can probably skip the rest of the tutorial. That last paragraph was the big secret. The rest of this is just applying the normal techniques to this specific case.

Requires

  1. An RPi. I'll be using a Raspberry Pi 3 Model B+, but this should work with any Raspbian-capable machine. Also, you will still need the normal mandatory accessories for your RPi -- a power cable, an SD card, etc.
  2. Any device which supports mobile hotspots. I'm using my Android phone for this tutorial, but the instructions will be similar regardless.
  3. An SSH client. I use the CLI packaged with the Windows Subsystem for Linux because it's convenient, but there are dozens, and literally any will work.

Instructions

Get your Pi ready

The first thing you need to do is get your Pi ready. There are a ton of tutorials on flashing Raspbian to your SD card; you can follow literally any of them.

Note: Do not install NOOBS! NOOBS is great if you have a monitor and keyboard to attach to your Pi. If you do, you should use it! This tutorial, however, is built on the premise that you don't, so make sure that you install Raspbian, not NOOBS.

Once you have Raspbian installed on the SD card, you'll need to add two files, both in the root directory of the SD card:

  1. An empty file called ssh
  2. wpa_supplicant.conf (see below)

Open up wpa_supplicant.conf in your favorite editor, and add the following:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=US

network={
     ssid="TODO"
     psk="TODO"
     key_mgmt=WPA-PSK
}

Keep that text editor open. We'll be setting the values of ssid and psk by replacing the TODOs on those lines.

Note: If you're not in the US, you should change the country= line to match your country's ISO 3166-1 alpha-2 code. I have yet to see things break when the wrong country is supplied, but that doesn't mean they can't.

Setting up your router

The exact steps for this vary by the product you're using as your router. I'll describe the process for a couple of common systems, but there are definitely tutorials online for what you're using, even if it's not here.

In general, though:

  1. Create a hotspot.
  2. Set ssid to the SSID of the network.
  3. Set psk to the network's password.

Windows

A Windows 10 machine will require an internet connection or some fiddling to create a hotspot, but it gives you the same benefits: Open your WiFi menu and look in the bottom-right or bottom-middle for a button labeled "Mobile Hotspot". Click it. You now have a hotspot starting.

Once that turns blue, right-click it, click "Go to Settings", and find the section showing the network name and password. Back in wpa_supplicant.conf, set ssid to the value next to "Network name", and psk to the value next to "Network password". Make sure you get all of the characters exactly right, including case.

Android

Note: Most phones allow this; however, yours might not, or might have slightly different names for things. Use your best judgement.

Open Settings. Find the category labeled "Connections" or "Network & internet" and tap it. Tap the option labeled "Mobile Hotspot and Tethering", or something similar. Tap the switch next to the option labeled "Mobile Hotspot" or "Wi-Fi Hotspot". Before you connect to it, go into the settings and try to find something labeled "WiFi sharing" -- this will keep you from accidentally using mobile data while you have your RPi connected. With my phone, once I turn WiFi sharing on, I can actually turn WiFi entirely off (or at least disconnect from my local WiFi) and still use it, but this may not work with yours.

As before, you'll need to set ssid and psk to the network name and password, respectively. Both should be visible on the main screen, but if you're having trouble finding them, try tapping the three-dot menu icon in the top corner, then "Configure Mobile Hotspot". Each of the fields will be labeled nicely.

You should also look at the Security dropdown. Chances are it'll be WPA2-PSK, and if that's the case, you don't need to do anything. If it's something else, you'll need to learn more about wpa_supplicant.conf to correctly configure your RPi.

Putting the two together

Now that you have your 'router' running and your RPi's SD card configured, put the SD card in the RPi and plug in the power cable. Give it some time to start up -- it'll take a little while.

Eventually, you should see the RPi pop up on the list of devices connected to your network. On a Windows 10 hotspot, you'll see the device's name, IP address, and MAC address in a table. On your phone, you might just see a name -- try tapping it or tapping-and-holding to bring up extra information about it.

You need to get your RPi's IP address for the next step. It should be clearly labeled. Be sure to copy it exactly; any typos will be annoying to spot.

Connecting to the Pi

This is actually the easiest step. Using the machine you'll control the RPi with, connect to the same network that the RPi is on. SSH into the IP address you just got, as the user pi with password raspberry. Congratulations! You're now in the RPi, and can do anything with it that you could with a normal terminal setup.
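
For example, if your hotspot assigned the Pi the (hypothetical) address 192.168.43.17, the command would be ssh pi@192.168.43.17, entering raspberry when prompted.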

Remember to change the default username and password for your RPi, by the way! Even if you don't now plan on using it for anything that's accessible to others, it's better to get into the habit of changing default passwords than to learn the hard way that you forgot.

One important caveat, by the way: connecting to a different network with either the RPi or your main machine will break the SSH connection. If you change the network your RPi is on, your connection will break before the other connection is established.

Next steps

From here, you have full control over your RPi. If your router connects to the internet, you can even install things from the internet. If you're brave, you could even set up X11 forwarding so you can graphically control your RPi. You can even walk away and leave the RPi to do its own thing for a while -- when you come back, it'll automatically reconnect to the WiFi in its wpa_supplicant.conf and you can access it like before.

It's trickier if you want it to connect to another network, though. You can, of course, just change wpa_supplicant.conf to point to the other network, reboot, and leave it be -- assuming you got all the details right, it'll connect to that network perfectly well. The trouble is when you want to get control over it.

If you have your RPi phoning home, you can of course write that system such that it can deliver commands like "reconnect to my hotspot" from your server. You could also have a script scanning the locally available WiFi and, if your hotspot is available, reconnecting to it. There are quite a few ways, and none of them are necessarily better than the others. It just depends on what you're trying to do, and what your resources are.

Good luck!

by Nic Hartley at April 03, 2019 12:46 AM

April 02, 2019

effendian

pyrophone

Blog 10

This week, we’ve mostly just been discussing different game ideas. We’ve been talking about different things, but in particular, I thought the area of the math curriculum focusing on fractions would be the most interesting to work with. It’s a little difficult coming up with non-cliche games revolving around fractions, as most of the easy ones to come up with usually involve pie charts, but I’d like to do something more interesting than that.

Interestingly enough, while looking for resources and insight into the fourth grade curriculum, I stumbled upon this website. If you don’t know what the JumpStart franchise of games is, they are pretty much just educational games developed for several different age groups to teach them curriculum. I’m sure most kids who grew up in the 90’s and early 2000’s already know this though. As it turns out, JumpStart has a page with free resources for teachers to use that focus on different curricula, in this case the fourth grade curriculum. While most of the resources are usually pen and paper activities, it still serves as a nice idea generator, as well as some nice insight into the specifics being taught.

The other thing I’ve been doing this week is working on my game engine project for class. For this project, we’ve been implementing a lot of the functionality using external libraries. One thing that I noticed while doing this is the number of libraries that are open source. Some libraries we are using are boost, assimp, and stb. There are actually a whole lot of libraries for developing game engines, and we used these in particular for two reasons:

  1. They are open source, meaning we can use them in our project without penalty
  2. One of the goals of our project is cross platform support for both Linux and Windows

While it’s been interesting having developers on both, it’s been pretty successful so far, and we’ve created something that works well on both platforms. Unfortunately though, we had difficulty finding a good open source audio library. Currently, we’re using FMOD, which is available to download and use at no cost if your development budget is less than $500. The only decent alternative is OpenAL, which is, as of the latest versions, also proprietary. Of course, an open source implementation of OpenAL exists, but it lacks hardware acceleration.

As someone who does audio, it’s kind of sad to see almost no choice in open source audio solutions, but at least there are alternatives for the other areas of game engine development.

April 02, 2019 12:00 AM

GreenGazebo

PAX East 2019

This last weekend (plus some) I went to Boston to attend PAX East. It’s a convention for gaming companies, including quite a few indie companies, to show off their newest games. There were plenty of vendors selling tons of cool stuff, from peripherals to collectable cards & board games. There was also a variety of talks throughout the 4 days that the convention runs. I went to quite a few of these, including some about designing games and getting into working on games as an intern, etc. Although it wasn’t the main topic of these meetings, open source software and licensing came up fairly frequently. I mostly learned more about some licenses and what comes with taking advantage of open source software that’s out there. I had a great time overall and learned a lot from the talks. This was, I believe, my 5th year going, and I plan to continue.

April 02, 2019 12:00 AM

March 30, 2019

jibby

meetup2/blog10: FP talk for RITLug

The current theme of RITLug talks is programming languages. My specialty.

This talk was a little more impromptu, so no slide deck to link to. However, I went over a handful of functional programming techniques, mostly exemplified through Haskell.

This meeting was at the same time as the start of Datafest. So I kept it short, and attendance was low.

FP creep

Functional programming has kinda blended with several popular, industry-standard languages. C++, Java, and Python all have lambda functions. Python makes use of map(), filter(), and sort(), all of which are higher-order functions (they take other functions as arguments).
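
For instance, a quick Python sketch:

nums = [3, 1, 4, 1, 5, 9, 2, 6]
doubled = list(map(lambda n: n * 2, nums))         # [6, 2, 8, 2, 10, 18, 4, 12]
evens = list(filter(lambda n: n % 2 == 0, nums))   # [4, 2, 6]
closest = sorted(nums, key=lambda n: abs(n - 5))   # sorted by distance from 5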

There’s no definitive category for what makes a language functional. However, a language is generally considered a functional language if it lends itself well to the functional style, notably:

  • evaluating expressions instead of statements
  • minimizing stateful elements
  • using functions as data

Expressions

Expressions return values of some sort, once evaluated. This is different from a C-style int a = 3; or the like: that doesn’t really “return” anything, it just changes the world in a certain way. In our case, it changes the value of a to 3. Fairly straightforward.

A more functional language wouldn’t have something like that: instead, something would be returned, to use in another expression. Like 2 + 2, which is added to a list such as [1,2,3], with the whole expression returning [1,2,3,4]. (Note that in actual Python, list.append() mutates in place and returns None, so the expression form looks a little different.)
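
A minimal Python sketch of the contrast:

# Statement style: mutate in place, no useful return value
nums = [1, 2, 3]
nums.append(2 + 2)             # returns None; nums is now [1, 2, 3, 4]

# Expression style: each step evaluates to a value you can keep composing
bigger = [1, 2, 3] + [2 + 2]   # evaluates to [1, 2, 3, 4]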

State

For an OOP-style approach to modifying the world, you build an object from a class (or other outline), and pass that around to various functions, or call methods on the object to change its state. A chair may have legs as an attribute, and it may lose legs with a removeLeg() method, and so on.

Functional programming is more math-rooted: same input, same output, like a mathematical function. Passing in a chair with 4 legs to a removeLeg() function would return a new chair with 3 legs: instead of modifying the world in-place, you recreate it to avoid mutability.
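
In Python, that chair might look like this (a sketch using a frozen dataclass for immutability):

from dataclasses import dataclass, replace

@dataclass(frozen=True)        # frozen: attributes can't be reassigned after creation
class Chair:
    legs: int

def remove_leg(chair):
    # Return a new Chair instead of mutating the one we were given
    return replace(chair, legs=chair.legs - 1)

old = Chair(legs=4)
new = remove_leg(old)          # Chair(legs=3); old is still Chair(legs=4)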

Functions as data

This whole concept of passing functions as arguments is where a lot of power comes from. Functions can decide filtering, provide a path from a to b, explain what to do when an error occurs, etc.

This is amplified with closures. Not only are you passing around instructions, but also the values or scope that come with them. It’s nearly like objects, with data and functions! But instead of inheritance, you’re mostly using typing, and minimizing state changes.
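
A quick Python sketch of a closure carrying data along with its behavior:

def make_adder(n):
    # n is captured by the inner function: the data travels with the code
    def add(x):
        return x + n
    return add

add5 = make_adder(5)
print(add5(10))   # 15: the closure remembered n = 5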

FP has been fun to play with for the past couple years, and it has resulted in a couple of creative solutions to programming problems as they appeared. It’s not perfect (speed and GC reliance are just a couple downsides), but it certainly has its place in a handful of domains.

by Josh Bicking at March 30, 2019 09:19 PM

March 29, 2019

jibby

meetup1: ZFS talk for RITLug

Just before spring break, I gave a talk at RITlug on the best filesystem in the history of filesystems: ZFS.

While that might be an opinion, it’s certainly cool, and worth talking about! There’s a lot ZFS can do, and a reason it’s sweeping server environments.

I covered the history of ZFS, the features it provides, some example commands to get started, and plenty of resources to explore! You can check it out on RITlug’s website.

TLDR

ZFS is a copy-on-write (COW) filesystem: every time it writes data, it writes it to a new place, rather than overwriting. The new location for the updated data is recorded, and all is well. While that sounds a bit wasteful, it means old data can be kept without any extra cost: just don’t throw away the old location when you write! ZFS tracks these old copies, called snapshots, and lets you jump back to them at a later date, or access any data they hold.

It’s great to take a snapshot (quickly!) before a system upgrade, and roll back if anything breaks.

ZFS has many more “modern” features: supporting huge amounts of data, verifying its integrity, compressing it before writing to disk, deduplication, and many others!

All talk

I enjoy giving talks. Not only can I (hopefully) help break down a complicated subject into bite-sized pieces, but presentation prep teaches me a lot about a subject! It’s also a chance to improve my public speaking and social skills. 🙂

by Josh Bicking at March 29, 2019 06:48 PM

March 27, 2019

GreenGazebo

Commarch Write-Up

Most of the answers to the questions were found simply by looking at the project’s documentation, as Krita is fairly well documented overall. For questions that were somewhat opinion-based, we quickly discussed them and usually agreed on what answers we would put.

== I. The project’s IRC Channel ==

  • http://webchat.freenode.net/?&channels=krita
  • Also available at: https://krita.org/en/irc/

== II Source Code repository ==

  • Main Repository: https://phabricator.kde.org/source/krita/
  • GitHub mirror: https://github.com/KDE/krita

== III. Mail list archive ==

  • https://mail.kde.org/pipermail/kimageshop/. Note that the mailing list is under Krita’s previous name, KImageShop.

== IV. Documentation ==

  • API documentation: https://api.kde.org/extragear-api/graphics-apidocs/krita/html/index.html
  • Documentation for users: https://docs.krita.org/en/
  • Bug tracking documentation: https://docs.krita.org/en/untranslatable_pages/reporting_bugs.html
  • Additionally, Krita’s “Get Involved” page (https://krita.org/en/get-involved/overview/) provides links to other sections of documentation on how to contribute to various areas. Since Krita is a KDE project, most of the documentation is found in those pages instead of pages specific to Krita, such as KDE’s development documentation (https://community.kde.org/Get_Involved/development).

== V. Other communication channels ==

  • Another active form of communication for the Krita project is through the Krita forums: https://forum.kde.org/viewforum.php?f=136.
  • Krita is also active on many different social media platforms, either through communities or a profile run by the KDE foundation. Platforms include: Reddit, Twitter, Google Plus, DeviantArt, Facebook, VK, and Mastodon.
  • There is also a Q&A forum: https://ask.krita.org/

== VI. Project Website and/or Blog ==

  • Website: https://krita.org/en/
  • News / Blog: https://krita.org/en/?post_type=post&s=

== A. Describe software project, its purpose and goals. ==

  • Krita aims to be a free and open-source digital painting and illustration application similar to Corel Painter. It intends to be a powerful professional tool that is supported by and supports open standards. Krita’s openness is also intended to let development be influenced by users in order to “support their actual needs and workflow.”

== B. Give brief history of the project. When was the Initial Commit? The latest commit? ==

  • Krita’s origins start in 1998, with Matthias Ettrich building a Qt GUI around GIMP as a showcase of the ease of development. Eventually, the KDE project started to develop their own image editor, similar to Photoshop or GIMP, called KImageShop, with the first commit being pushed on June 8th, 1999. From 2004-2009, the project was focused on image editing, but after 2009, the focus shifted to digital painting. Since Krita has very active development, commits are constantly being merged, meaning the repository is always up to date.

== C. Who approves patches? How many people? ==

  • Krita developers approve patches on KDE’s Phabricator. Anyone who submits 3 patches is eligible for developer status.

== D. Who has commit access, or has had patches accepted? How many total? ==

  • https://phabricator.kde.org/project/members/8/ lists 27 members: however, it’s difficult to tell who has any special privileges. It lists woltherav (Wolthera van Hövell) as a Krita Dev/Manual Writer, rather than just a User. However, rempt (Boudewijn Rempt) and dkazakov (Dmitry Kazakov) are both Users, but have the most knowledge of the project, according to Git by a Bus.

== E. Who has the highest amounts of “Unique Knowledge?” (As per your “Git-by-a-bus” report. If there is a tie, list each contributor, with links if possible) ==

  • rempt (Boudewijn Rempt) and dkazakov (Dmitry Kazakov)

== F. What is your project’s “Calloway Coefficient of Fail?” ==

  • 20 points. This is because it’s a large project, and it uses CMake.

== G. Has there been any turnover in the Core Team? (i.e. has the same top 20% of contributors stayed the same over time? If not, how has it changed?) ==

  • Repo history goes back to 1998. Kazakov started contributing in 2009, Rempt in 2003, and Zander in 2005. The 3 of them make up ~60% of the knowledge in the codebase.

== H. Does the project have a BDFL, or Lead Developer? (BDFL == Benevolent Dictator for Life) If not, what is the structure of its leadership, and how is it chosen? ==

  • I can’t find any information on conflict resolution. The KDE Code of Conduct https://kde.org/code-of-conduct/ pushes pragmatism and respect. Ideally, developers respectfully hash it out, and the best idea wins. Anyone who doesn’t follow this process is in violation of the CoC.

== I. Are the front and back end developers the same people? What is the proportion of each? ==

  • Krita developers are full stack. As explained on https://krita.org/en/get-involved/developers/, “To work on Krita, you have to use C++ and Qt. It’s a good way to learn both, actually!”

== J. What have been some of the major bugs/problems/issues that have arisen during development? Who is responsible for quality control and bug repair? ==

  • Searching on https://bugs.kde.org with “product:krita severity:critical” only produced one big bug, an undo/redo bug that causes data loss: https://bugs.kde.org/show_bug.cgi?id=397836 This was opened last August and confirmed, but so far has not been addressed.

== K. How is the project’s participation trending and why? ==

  • Looking at most of the recent submissions to the project, there is still a fairly good number of people contributing almost every day.

== L. In your opinion, does the project pass “The Raptor Test?” (i.e. Would the project survive if the BDFL, or most active contributor were eaten by a Velociraptor?) Why or why not? ==

  • Based on our results from GBAB, I feel this project would survive. Even though the top contributor’s risk is about 30%, which is a large chunk, I don’t think the project would die.

== M. In your opinion, would the project survive if the core team, or most active 20% of contributors, were hit by a bus? Why or why not? ==

  • Again, based on the results from GBAB, I feel the project would most likely not survive if the top 20% of contributors were to leave: judging from the graph showing the top 25 contributors, the top 5 alone account for around 75% of the contributions.

== N. Does the project have an official “on-boarding” process in place? (new contributor guides, quickstarts, communication leads who focus specifically on newbies, etc…) ==

  • Yes, the Krita website has a page specifically for getting involved, explaining that a good place to start is some of the ‘Junior Jobs’, as well as pointing new contributors towards helpful resources and a basic overview of the project.

== O. Does the project have Documentation available? How extensive is it? Does it include code examples? ==

  • As far as use of the program goes, the documentation is very extensive and includes many tutorials and explanations of concepts in the program. The development documentation is also very extensive, complete with code examples.

== P. If you were going to contribute to this project, but ran into trouble or hit blockers, who would you contact, and how? ==

  • I would probably try to contact someone on the contributors manual page: it lists 7 people who are almost always around on the IRC channel to help out.

== Q. Based on these answers, how would you describe the decision making structure/process of this group? Is it hierarchical, consensus building, ruled by a small group, barely contained chaos, or ruled by a single or pair of individuals? ==

  • From what I’ve seen, it seems that the small group that contributes the most ends up making most decisions, but they also look for ideas from the rest of the community.

== R. Is this the kind of structure you would enjoy working in? Why, or why not? ==

  • It seems like it would be a project we would enjoy working on: it is well documented, with many members willing to help new contributors.

March 27, 2019 12:00 AM

March 26, 2019

jibby

blog09/commarchreport: Krita

KDE has been going strong for more than 20 years: it was no surprise their development practices were so fluid.

Even so, there were a few surprising things in our analysis. You can find the full report at our Github repo: https://github.com/jibby0/commarch-krita

Author Knowledge

Running Git by a Bus v2 showed that two main contributors hold a majority of codebase knowledge. This is certainly better than one, and both of them have been around a while (10-15 years).

Together, they hold about 60% of the knowledge. However, the rest of it seems pretty evenly distributed! I know KDE is inclusive and has streamlined their developer onboarding process.

All in all, not terrible. Should both of them fall off the face of the earth, hopefully the other 40% could manage.

Patch submission process

The process for creating and submitting a patch was documented, but I couldn’t find any information on who exactly could approve them. I think anyone with the “dev” status can, which you achieve after 3 patches.

Still, not having a public, formal process isn’t the greatest. Or maybe I just can’t find it.

Callaway Coefficient of Fail

The only places Krita failed in this test were project size and build tools. Krita is (understandably) huge: it relies heavily on Qt, and has plenty of features, such as supporting both bitmap and vector images. It makes sense that the repo would be large, compressed or uncompressed. I believe this is a factor that should be either removed entirely, or at least updated to modern standards.

Krita uses CMake: it lost points for not using GNU Make. However, CMake works a lot better with C++ projects, and writing configs is much less of a pain than writing Makefiles. Since it’s open source and a popular standard, I see no problem with using an alternative build tool.

Other than that… great!

There were only a few outstanding issues with Krita’s development process, and none of them seemed detrimental. I’ve thought about contributing to the KDE project: mostly their desktop environment, as I use it daily. The modularity is wonderful, and I would love to see it improve.

Since nearly all KDE projects follow a similar development flow, it’s definitely pushed me in a positive direction.

by Josh Bicking at March 26, 2019 08:13 PM

effendian

pyrophone

Commarch Report

For our commarch assignment, my team analyzed the Krita project. Krita is an open source digital painting application made by the KDE foundation. With origins traced back to 1998, Krita has been in development for a long time, with its focus changing through development as well.

Krita as a project is overall pretty strong, with a great community supporting it. Two people make up the main contributors to the project, with about 60% of the contribution, and the other 40% is pretty evenly distributed among other contributors. The process for submitting patches is described in KDE’s development documentation, along with Krita’s Get Involved page. The process seems pretty simple, requiring you to submit patches to be reviewed for at least the first three commits. Afterwards, you are given developer status and can submit patches directly to the main repository.

Krita seems like a pretty stable project according to Callaway’s coefficient of fail, losing a few points for being too large and for not directly using GNU make. As an image editing application, it’s hard for Krita to be compressed to such a small size, so it makes sense for it to be very large, and I’d argue that’s okay for this project. As for not using GNU make, Krita uses CMake instead, which I’d say makes more sense for a large project like Krita. Of course, you could argue that CMake still counts, as it just automates the process of creating Makefiles.

Overall, Krita is a healthy project that I think will definitely be around for a long time. With how open it is and how many people are contributing, it is pretty clear that the project is thriving. The project even generates revenue, having a professional version that is sold on digital marketplaces such as Steam.

Here is a link to the full analysis of Krita.

March 26, 2019 12:00 AM

March 22, 2019

effendian

March 20, 2019

effendian

Literature Review #3: NYS Next Generation Mathematics Learning Standards

Review: NYS Next Generation Mathematics Learning Standards

In this literature review, HFOSS students were asked to review the education standards for New York State’s mathematics curricula. This is in preparation for our final project.

March 20, 2019 07:32 AM

March 19, 2019

pyrophone

Lit Review 3

In this post, I’ll be reviewing this document about the NYS math curriculum. Specifically, I’ll be looking at the fourth grade math section (pages 55-66).

The Gist

The document is an explanation of the NYS math curriculum for fourth graders. It explains what the students should be learning, how they should be applying it, and the expected outcomes.

Review

Overall, I thought the document was informative, though unsurprisingly dry. It explained pretty thoroughly what is expected to be taught and what students are expected to know. It even goes pretty in depth with examples of many of the topics themselves. While playing around with Sugar, I can see where some of the ideas for some of the games come from, in regards to fitting in with the curriculum. In particular, the visual match activity, when played on advanced, contains a lot of matches that involve multiplication of numbers. With this activity, you can match already-multiplied values, groups of values to multiply, or tally-marked representations of numbers. Of course, I do think there could be more programs targeting the math section better. For example, the memorize activity could have a multiplication or fraction game premade in it instead of relying on the user to create it. The document also seems to have a very large focus on understanding word problems / translating math vocabulary to writing and back.

The Good

  1. The curriculum is laid out really well and is really detailed.
  2. The examples provided for each area make sense and easily illustrate what the text means.
  3. Since the topics cover a pretty large area, it would be easy to pick a few segments and make some sort of game that addresses those areas.

The Bad

  1. While informative, this document has a lot to it. There is so much it discusses that it’s hard to remember everything in it.
  2. There are a few areas missing examples. While not critical, I am a visual person who likes examples, so describing the math you want someone to know without providing examples makes it hard to understand sometimes.
  3. The document is, unsurprisingly, very dense and dull. Considering this is a document describing curriculum, this is kind of expected.

Questions

  1. I feel like there isn’t much discussion on geometry. Why put in some of those geometric concepts if the focus is on understanding multiplication, fractions, etc.?
  2. There’s a lot of basic logic that is being taught at this level it seems. Could that be a good foundation for teaching computing concepts in the next year or so?
  3. This curriculum is from 2017. Have there been any updates to it since?

March 19, 2019 12:00 AM

Blog 08

This week was spring break, so I haven’t really done a whole lot. I’ve been working on some projects in Godot over break, and recently installed OpenSUSE on a new hard drive that I got to replace my failing one.

Just a bit ago, Godot version 3.1 was released, which includes my small fix. It also includes a bunch of fixes and improvements. So many small quirks and issues I encountered in the previous version have now been fixed, and it works really well. I think that if you were worried about using Godot because it would not work as well as Unity, now would be a great time to try it because, in my opinion, it is just as powerful for most types of games. I also just recently found a small error in the Godot documentation and pushed a fix for it. It’s just a small error and a small fix, but I figured if I found it someone else might too, and since it hasn’t been fixed yet, I might as well do it.

As previously stated, I also spent time installing OpenSUSE again. If you don’t know, OpenSUSE is a Linux distribution supported by the German company SUSE and based on their SUSE Linux distributions. OpenSUSE is a pretty well put together operating system, and installing it is a super simple process. It wasn’t difficult to set anything up: just connecting to the wifi, typing in the account name and password, making sure the timezone is right, and then waiting. It’s a pretty easy distro to install if you are interested in Linux but don’t really know where to start.

The reason I chose OpenSUSE in particular comes down to two things: the package manager and the way rolling release is handled. For its package manager, OpenSUSE uses ZYpp. ZYpp is very convenient, providing a lot of super easy shortcuts and options for managing packages, as well as some really powerful search options, such as searching for packages by what they provide. Finally, ZYpp auto-installs dependencies for software you need, which is just really convenient. OpenSUSE also has a version that is called “rolling release.” This means that the operating system’s packages are updated fairly quickly as new versions are released. This usually means a fair amount of stability is lost in order to have the most up to date packages. However, OpenSUSE has an interesting process involving testing the software before releasing it, allowing for more stability in its rolling release. It’s great because it allows the OS to be more stable, while still having very up to date software. Rolling release also means I don’t have to perform large updates that involve switching repos, instead just always running the update command. This is just convenient for me, as I probably wouldn’t be able to remember to switch repos and perform a full update…

March 19, 2019 12:00 AM

GreenGazebo

Lit Review 3 - Possible Final?

This week I read the 4th grade section of the Mathematics Learning Standards for New York, for 2017. http://www.nysed.gov/common/nysed/files/programs/curriculum-instruction/nys-next-generation-mathematics-p-12-standards.pdf

In preparation for the final, I focused on looking for ideas for possible apps that would fit inside the given criteria. The 4th grade curriculum boils down to really only 3 categories: Number and Operations in Base Ten, Numbers and Operations - Fractions, and Geometry pertaining to two-dimensional shapes. When reading just the first page, I immediately thought of some sort of geometry game where you are given multiple shapes and have to identify which has ‘X’ number of lines of symmetry, or something along the lines of “what kind of angle is 35 degrees,” for example. So that is one possible idea for an app. Another, which I’m sure isn’t that hard to do but might be a little boring, is something to do with the operations in base ten, which includes mostly an introduction to division and understanding the value of numbers in different places. This, while useful, might be a bit dull to create, so I am shying away from that one. Fractions, on the other hand, while similar, could pretty easily be turned into a matching game, as one of the points they mention is understanding that some fractions, such as 3/4 and 9/12, are equal.

March 19, 2019 12:00 AM

March 18, 2019

jibby

blog08/litreview3: NY’s 4th Grade Math, and a Sugar Project

This litreview was the New York State Next Generation Mathematics Learning Standards 2017, specifically the Grade 4 Math overview. As I looked at the requirements, I started thinking about potential games that could aid the curriculum. The standards cover a few different areas, each of which could be assisted with a Sugar activity.

Geometry

Looking through shape identification and angles, the first thing that came to mind was a Physics-like activity. Something that wasn’t a sandbox, but instead required a drawn shape to meet certain criteria. I’m not sure how that would implement angle measurement or angle calculation, though. That sounds tricky to gamify.

Fractions

I remember seeing a fractions game on display for FOSS@RIT, at the Rochester (then Mini) Maker Faire. I think it was PyCut. In looking that up, I found a repo of RIT games and projects. I wonder if I could use those for inspiration.

A fraction game sounds fun. Either showing equality between fractions, or building up “pieces” of a whole to equal another fraction.

But it’d be hard to top Frog Fractions.

Multiplication

When I was looking at that list, it reminded me of a math puzzle game I would play when I was younger. Maybe an “adventure” game would be interesting. Different mazes to go through, with math “roadblocks”, or something. That would make doing simple math problems entertaining. Or, it entertained me at that age, at least.

That game involved a big maze you would navigate, trying to reach the end. It had different difficulty levels, and blocks along the way, where you had to do math problems. I remember most of them being multiplication problems. I enjoyed it, but looking back, it was pretty simple. Maybe I could spice it up? Power ups (skip a problem, get help with a problem, etc.), catchy music, interesting blocks in the way, and so forth.

Lots to think about!

This leaves a lot of wiggle room, and a few questions. I’m not sure what I’d like to do, but knowing the target goals and audience is helpful. It’s also nice to have those examples. Maybe playing around a bit more with Sugar, or digging up those old games I enjoyed, will help too.

by Josh Bicking at March 18, 2019 09:49 PM

March 08, 2019

effendian

jibby

quiz1

1) Please expand each of the following acronyms (1 pt each):

1.1) IRC: Internet Relay Chat
1.2) FOSS: Free and Open Source Software
1.3) OLPC: One Laptop Per Child
1.4) PR: Pull Request

(Please use the expansion most appropriate to the class.)

Bonus: Give the expansion for the acronym GNU. (1 pt)

GNU's Not Unix

2) What is the name of the version control system we use in this course? (1 pt)

Git

Bonus: Give the name for another version control system. (1 pt)

Subversion

3) Please give the one-word name for the interface used in the OLPC computers & our VMs? (1 pt)

Sugar

4) Bonus: What is the short, two-letter name for the OLPC computers for which this desktop software was first developed? (1 pt)

XO

5) We refer to sites that host source code as “forges”. What is the name of the primary forge used in this course? (1 pt)

Github

6) Bonus: Name the other forge we have used? (1 pt)

Gitlab

7) Bonus: Name another forge, one we have not used for this course. (1 pt)

Sourceforge

Multiple choice

8) The GitHub-specific term to describe the process in which, starting from one repository hosted at GitHub, one creates another repository, also hosted at GitHub, but under the control of a different user account. (1 pt)

a) repository b) branch c) remote d) fork e) clone

d) fork

9) A collection of related commit objects (1 pt)

a) repository b) branch c) remote d) fork e) clone

a) repository

10) A separate, but related, repository from which one may fetch or pull changes into one’s own working copy, and to which one possibly has permission to push changes. (1 pt)

a) repository b) branch c) remote d) fork e) clone

c) remote

11) The general term in git for making an exact, working copy of another repository in which changes can be tracked separately between the two versions. (1 pt)

a) repository b) branch c) remote d) fork e) clone

e) clone

12) A namespace in which one can track changes to a set of files within a given repository. This term applies both to the action and to the result of the action. Comparisons (‘diffs’ or patches) can be made between different such namespaces. (1 pt)

a) repository b) branch c) remote d) fork e) clone

b) branch

13) Consider the following (+1 for each correct, -1 for each incorrect):

a) e59b627
b) 451.867
c) dca_079
d) 9539807
e) DB6A60A
f) 614@1d4
g) be34fb47c60d


Looking just at the string of non-space characters to the right of the close-parenthesis …

List which of these could be a valid commit identifier?

a,d,e,g

14) We’ve discussed “the four R’s” as a shorthand for the freedoms attached to software for it to be considered “free” or “open source”.

List or describe each. (eg, if you can remember the “r” word you can just give that. If you cannot remember the term, but can describe the freedom involved, that also counts).

Various “r” words are roughly synonymous for some of the freedoms, but we’re counting freedoms here, not synonyms so if you give two (or more) terms for the same freedom, it only counts once. For the purposes of this quiz, “remix” does not count as describing any of them. (2 pt each)

14.1) Read
14.2) Run
14.3) Redistribute
14.4) Repurpose

by Josh Bicking at March 08, 2019 02:20 PM

pyrophone

Quiz 1

  1. Please expand each of the following acronyms:
    1. IRC - Internet Relay Chat
    2. FOSS - Free and Open Source Software
    3. OLPC - One Laptop Per Child
    4. PR - Pull Request
      • Bonus: GNU - GNU’s Not Unix!
  2. What is the name of the version control system we use in this course?
    • git
    • Bonus: Bitkeeper
  3. Please give the one-word name for the interface used in the OLPC computers & our VMs?
    • sugar
  4. Bonus: What is the short, two-letter name for the OLPC computers for which this desktop software was first developed?
    • xo
  5. We refer to sites that host source code as “forges”. What is the name of the primary forge used in this course?
    • GitHub
  6. Bonus: Name the other forge we have used?
    • GitLab
  7. Bonus: Name another forge, one we have not used for this course.
    • Bitbucket
  8. The GitHub-specific term to describe the process in which, starting from one repository hosted at GitHub, one creates another repository, also hosted at GitHub, but under the control of a different user account.
    • d) fork
  9. A collection of related commit objects
    • a) repository
  10. A separate, but related, repository from which one may fetch or pull changes into one’s own working copy, and to which one possibly has permission to push changes.
    • c) remote
  11. The general term in git for making an exact, working copy of another repository in which changes can be tracked separately between the two versions.
    • e) clone
  12. A namespace in which one can track changes to a set of files within a given repository. This term applies both to the action and to the result of the action. Comparisons (‘diffs’ or patches) can be made between different such namespaces. (1 pt)
    • b) branch
  13. Consider the following (+1 for each correct, -1 for each incorrect): a. e59b627 b. 451.867 c. dca_079 d. 9539807 e. DB6A60A f. 614@1d4 g. be34fb47c60d

    Looking just at the string of non-space characters to the right of the close-parenthesis …

    List which of these could be a valid commit identifier?

    • a
    • d
    • g
  14. We’ve discussed “the four R’s” as a shorthand for the freedoms attached to software for it to be considered “free” or “open source”.
    1. Run
    2. Read
    3. Repair
    4. Redistribute

March 08, 2019 12:00 AM

GreenGazebo

Quiz 1

IRC = Internet Relay Chat
FOSS = Free & Open Source Software
OLPC = One Laptop Per Child
PR = Pull Request
GNU = GNU’s Not Unix

  1. We use a distributed version control system in class. Another type of version control is a centralized version control system.
  2. Sugar
  3. XO
  4. Github
  5. GitLab
  6. Bitbucket
  7. D - Fork
  8. A - Repository
  9. C - Remote
  10. E - Clone
  11. B - Branch
  12. a, d, e, g
  13. Read, Run, Repair, Redistribute

March 08, 2019 12:00 AM

March 07, 2019

jibby

teamproposal: Analysis of Krita

Our team has decided to analyze Krita, an image manipulation tool for Linux. Krita handles raster (and recently vector) graphics, and targets digital artists in the tools it provides. This program falls under the (large) umbrella of the KDE project.

Krita’s source is hosted on the KDE Project’s Phabricator server.

Our team will consist of:

  • Josh (me)
    • IRC: jibby
    • email: jhb2345<at>rit<dot>edu
    • role: Communication and Structure
  • Derek
    • IRC: GreenGazebo
    • email: dje4179<at>rit<dot>edu
    • role: Documentation and Licensing
  • Geo
    • IRC: pyrophone
    • email: ga9494<at>rit<dot>edu
    • role: Coding and Contribution

by Josh Bicking at March 07, 2019 09:34 PM

pyrophone

Team Proposal

For our proposal, our team chose to analyze Krita. Krita is a free and open source image editing application developed under the KDE project. It is similar to Adobe’s Photoshop and is focused on digital painting and art.

Krita’s source can be found on their phabricator page.

Our team consists of:

  • Giovanni Aleman
    • Email: ga9494@rit.edu
    • Role: Coding and Contribution
  • Josh Bicking
    • Email: jhb2345@rit.edu
    • Role: Communication and Structure
  • Derek Erway
    • Email: dje4179@rit.edu
    • Role: Documentation and Licensing

March 07, 2019 12:00 AM

GreenGazebo

Commarch Proposal

My group for commarch:

Josh Bicking - jhb2345@rit.edu
Giovanni Aleman - ga9494@rit.edu
Derek Erway - dje4179@rit.edu

We will be analyzing Krita for this project. Krita is a free and open source painting program. It is designed to provide a service similar to Microsoft Paint, Paint.NET, Photoshop, etc. As far as roles go, I will be handling anything to do with licensing and documentation, Josh will be handling everything related to communication and structure for the project, and Gio will be handling things related to coding and contribution to the project. In addition, all of us will help with tasks that don’t fall into our categories if needed.

Repo for the Project: https://phabricator.kde.org/source/krita/

Github mirror: https://github.com/KDE/krita

March 07, 2019 12:00 AM

March 05, 2019

pyrophone

Blog 07

This week, I contributed back to an open source project that I use. Specifically, I fixed a bug in the Godot game engine. As my last blog mentioned, I’ve been using Godot for a while to develop projects, and I’m starting to really focus on ones that I can eventually sell. Contributing back to Godot was not only something I felt I should do; it’s something I’ve wanted to do for a while but never really knew where to start.

Eventually, after combing through the GitHub issues for bugs marked as “junior-jobs,” I found a bug that seemed simple enough to fix. The bug itself was a small issue with how a texture displayed in one of the menus. After reading Godot’s documentation about fixing and submitting bugs, I set up the environment on my laptop and started looking over the repository. From the documentation, there wasn’t really a formal process for anything but submitting pull requests. I didn’t need to be assigned to anything I was trying to fix; all I had to do was create a pull request with a link to the issue it intended to fix.

It didn’t take me too long to figure out the structure of the project itself, as I’m pretty familiar with both game engine development and Godot itself, so I quickly found the file I needed to look at for the bug. After a bit of searching in the file, I found the bug itself. I quickly wrote a patch for it and submitted it, before eventually realizing there were some edge cases that needed to be fixed. There was some discussion between me and one of the main developers about the issue, and I amended the commit on my pull request. Eventually, the pull request was accepted and merged into master.

It was quite an interesting process overall, and now that I’m pretty familiar with both the overall process of patching Godot and the structure of the project, I think I’ll try to contribute more fixes in the future.

Here’s a link to the pull request itself, with our discussion.

March 05, 2019 12:00 AM

March 03, 2019

jibby

blog07: RSS and selfoss

About a year ago, I realized just how sick I was of social media-esque news aggregators. By that I mean Reddit and Hacker News. They were huge timewasters, and their navigation style was a pain. I see news reading as a linear, email-style task: view the articles in front of me, read or discard them, and we’re done for the day.

Plus, the comments weren’t always… the most intelligent.

RSS is an interesting technology: most browsers shipped with support for it (until recently), there are plenty of readers to choose from, and nearly all sites offer RSS feeds. It’s one of those pieces of tech that hasn’t really gotten a replacement, or needed one. In principle, it works pretty well in the modern age.

When there’s support for it.

For the life of me, I couldn’t find a nice RSS aggregator for Android. Most of them used odd formats (which meant I couldn’t read on my laptop too, without some ad-hoc conversion), or were just… clunky.

selfoss: an RSS reader for the modern(?) era

selfoss is a self-hosted service for gathering and categorizing RSS feeds. It supports multiple users, grouping feeds, sorting feeds, and many database backends.

The selling points for me though: a solid web app, and a just as solid Android app.

Since both apps were just frontends to the same service, I didn’t have to worry about passing files between my phone and my laptop, or anything of that nature.

Getting started was really simple! Check out the selfoss webpage, Github repo, or the Docker image I use. The configuration is easy: create a config.ini from the included default.ini. Set up a database if you want to (otherwise, it’ll use the included sqlite backend), and that’s it! Accessing the service through your web browser will prompt for the configured username and password, and you can begin adding RSS feeds.

A couple gotchas

Be sure to set up SSL, especially if you use the Android app. From the app, as of writing, requests are done a little differently. Instead of saving a cookie on login, the username and password are sent as URL parameters. By that I mean, it’ll send tons of requests in the form of http://selfoss.example.com/?username=myusername&password=mypassword. This means login info will be regularly sent in plain text. Don’t do that.

Feeds only update when you tell them to. As the selfoss installation section illustrates: Create cronjob for updating feeds and point it to https://yoururl.com/update via wget or curl. You can also execute the cliupdate.php from commandline. The Docker image linked above does this automatically, and is configurable via the CRON_PERIOD envvar.
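For reference, the cronjob itself can be a one-liner (a sketch; the 15-minute schedule is arbitrary, and yoururl.com is the placeholder from the docs quoted above):

*/15 * * * * curl -s https://yoururl.com/update > /dev/null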

So, uh. Where the feeds at?

In more places than you might think.

Most every news site will have a /rss or /feed page. Or a web search for “washington post rss” will link you to it. Many have different feeds for different topics too.

WordPress and other blogs will generally have a feed available, sometimes by tag or category.

Reddit has them too! reddit.com/r/linux/.rss will get you a feed for the hottest posts on the sub. I started my RSS feed gathering journey by picking the top news sites and programming blogs on my favorite subreddits. I’ve mostly removed subreddits and multireddits from my RSS feeds, as I’ve found the sources I like and don’t like.

As an added bonus, I get news earlier than my Redditor friends do. 🙂

selfoss lets you export your feeds and their tags as well! So if you’re curious, here’s where my tech, political, and local news are all coming from.

Keeping up

RSS and selfoss work really well for me, and help me keep up with what’s going on around the web.

New organizational tools and technologies are nice to check out, but sometimes a new take on an old classic hits the spot.

by Josh Bicking at March 03, 2019 10:14 PM

March 02, 2019

nic-hartley

Using Grammarly in Opera Browser

First, you need to add the Install Chrome Extensions plugin for Opera.

Install Chrome Extensions

Next, go to the Chrome Web Store and find the Grammarly extension. This is where the "Install Chrome Extensions" plugin kicks in and allows you to directly install the Grammarly plugin from the Chrome Web Store. After you click install, you will be taken to the Opera extensions screen and you need to finalize the process by accepting the permissions and clicking Install.

And that's it! Next time you post on Dev.to, you can make use of Grammarly to write and review your post. Simply click the small Grammarly icon in the bottom right-hand corner of the text area in your next new post.


by Nic Hartley at March 02, 2019 11:07 AM

Make your builds quicker with Gulp 4

Gulp version 4 is out! The migration is pretty straightforward and involves minimal breaking changes while bringing one very useful feature: the ability to parallelize tasks.

Upgrading from 3 to 4

You can follow one of the most popular Medium posts on making the transition successfully.

Sam also wrote up his recipe for making the transition a breeze.

Compressing images with Gulp 3

Before Gulp 4, this is what you might have done to compress images.

const gulp = require("gulp");
const imagemin = require("gulp-imagemin");
const webp = require("gulp-webp");

gulp.task("pictures", function() {
  return gulp.src("src/img/**/*.{jpg,jpeg,png,svg}")
    .pipe(imagemin())
    .pipe(gulp.dest("dist/img"));
});

gulp.task("webp", function() {
  return gulp.src("src/img/**/*.{jpg,jpeg,png}")
    .pipe(webp())
    .pipe(gulp.dest("dist/img"));
});

gulp.task("img", ["pictures", "webp"]);

Which means

Compress my jpeg, png and svg files one by one, and wait until they are all compressed before converting them into WebP.

Which is fine, but there are caveats: the pictures task has to process all of your images in a single stream, and, strictly speaking, Gulp 3 starts the tasks in a dependency array concurrently, so even the ordering above isn't actually guaranteed.

If we think about it, we could split our process by file types: png, jpeg, svg. This is possible because gulp-imagemin uses different libraries to compress images (SVGO, PNGQuant, JPEGTran).

Compressing images with Gulp 4

First let us keep the same algorithm, and use the new gulp.series() method.

const gulp = require("gulp");
const imagemin = require("gulp-imagemin");
const webp = require("gulp-webp");

gulp.task("picture", function() {
  return gulp.src("src/img/**/*.{png,jpg,jpeg,svg}")
    .pipe(imagemin())
    .pipe(gulp.dest("dist/img"));
});

gulp.task("webp", function() {
  return gulp.src("src/img/**/*.{png,jpg,jpeg}")
    .pipe(webp())
    .pipe(gulp.dest("dist/img"));
});

gulp.task("img", gulp.series("picture", "webp"));

Doing the same, but with modern methods

If you run gulp img in your console, you will get the same output. Now we are using the latest Gulp 4 features: a pretty straightforward migration!

Let us split up our picture task.

const gulp = require("gulp");
const imagemin = require("gulp-imagemin");
const webp = require("gulp-webp");

gulp.task("png", function() {
  return gulp.src("src/img/**/*.png")
    .pipe(imagemin())
    .pipe(gulp.dest("dist/img"));
});

gulp.task("jpg", function() {
  return gulp.src("src/img/**/*.{jpg,jpeg}")
    .pipe(imagemin())
    .pipe(gulp.dest("dist/img"));
});

gulp.task("svg", function() {
  return gulp.src("src/img/**/*.svg")
    .pipe(imagemin())
    .pipe(gulp.dest("dist/img"));
});

gulp.task("webp", function() {
  return gulp.src("src/img/**/*.{png,jpg,jpeg}")
    .pipe(webp())
    .pipe(gulp.dest("dist/img"));
});

gulp.task("img", gulp.series("png", "jpg", "svg", "webp"));

Again, nothing changed; we are only making things easier for what comes next.

Now the fun part: let us make the 3 first tasks run in parallel.

const gulp = require("gulp");
const imagemin = require("gulp-imagemin");
const webp = require("gulp-webp");

gulp.task("png", function() {
  return gulp.src("src/img/**/*.png")
    .pipe(imagemin())
    .pipe(gulp.dest("dist/img"));
});

gulp.task("jpg", function() {
  return gulp.src("src/img/**/*.{jpg,jpeg}")
    .pipe(imagemin())
    .pipe(gulp.dest("dist/img"));
});

gulp.task("svg", function() {
  return gulp.src("src/img/**/*.svg")
    .pipe(imagemin())
    .pipe(gulp.dest("dist/img"));
});

gulp.task("webp", function() {
  return gulp.src("src/img/**/*.{png,jpg,jpeg}")
    .pipe(webp())
    .pipe(gulp.dest("dist/img"));
});

gulp.task("img", gulp.series(gulp.parallel("png", "jpg", "svg"), "webp"));

Which means:

Compress all the png, jpg and svg files in whatever order you want, but wait for all of them to finish before converting them into WebP.

Using the new gulp.parallel(), running parallelized tasks is a piece of cake!

Going further

Now this is better, but there is still one little thing that bugs me. If you noticed, this is the glob used for our pictures:

".{png,jpg,jpeg,svg}"

But the webp glob is missing the svg:

".{png,jpg,jpeg}"

I did this on purpose, because I do not want to convert my SVG files into WebP: they scale perfectly for responsive layouts while keeping high quality, so I do not want to lose that.

This also means our webp task does not have to wait for the svg task to finish. So we can add another layer of optimization, like the following.

gulp.task('picture', gulp.parallel('png', 'jpg'));

gulp.task('img', gulp.parallel(gulp.series('picture', 'webp'), 'svg'));

Which means:

Compress SVG files at the same time as the jpg and png files, with webp waiting for the jpg and png tasks to finish before converting them into WebP.

Conclusion

I love Gulp for its great user experience. Building complex bundling logic is just so neat and clear.

Check out the documentation if you want to know more about gulp and all its features, including watching file changes and performing tasks as you update a file, and plenty more.
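For a taste of the watch feature, re-running the whole image pipeline whenever a source image changes might look like this (a sketch, not from the original article; the task name "watch:img" is mine):

gulp.task("watch:img", function() {
  // Re-run the registered "img" task whenever a source image changes
  gulp.watch("src/img/**/*", gulp.series("img"));
});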

Gulp also supports modern JavaScript notation, so you might want to write your fancy tasks like:

const { src, dest, series, parallel } = require("gulp");
const imagemin = require("gulp-imagemin");
const webp = require("gulp-webp");

const png = () => src("src/img/**/*.png")
  .pipe(imagemin())
  .pipe(dest("dist/img"));

const jpg = () => src("src/img/**/*.{jpg,jpeg}")
  .pipe(imagemin())
  .pipe(dest("dist/img"));

// Renamed from `webp` so it does not redeclare the plugin imported above,
// which would be a SyntaxError
const webpTask = () => src("src/img/**/*.{png,jpg,jpeg}")
  .pipe(webp())
  .pipe(dest("dist/img"));

const img = series(parallel(png, jpg), webpTask);

module.exports = { img };

I hope you are as amazed as I am about those new features! Happy bundling!

by Nic Hartley at March 02, 2019 11:02 AM

Go's method receiver: Pointer vs Value

Introduction

This topic is huge, and there is plenty of information online. This blog tries to keep it short and useful for experienced programmers who are new to Go.

Coming to Go from Python introduced me to a new concept I hadn't had to put thought into before. Python is a pass-by-object-reference language, and you have no direct control over that. What this means is that when you pass an object (everything in Python is an object) to a function, you pass a reference to the object itself.

We can see it using the id() function, which returns the identity of an object. This identity has to be unique and constant for this object during its lifetime.

>>> def action_with_string(s):
...     print(id(s))
...
>>> v = "sample string"
>>> id(v)
4360321520
>>> action_with_string(v)
4360321520
>>>

# A list example

>>> ls = list()
>>> def append_list(l):
...     l.append(1)
...
>>> append_list(ls)
>>> ls
[1]
>>> append_list(ls)
>>> ls
[1, 1]

The object inside a function is the same as the caller's object. Whatever you pass can be mutated (as long as it is mutable). It is discussed on Stack Overflow: Python functions call by reference, if you are interested in reading further.

With Go, when you define a method on a struct, you choose whether the receiver (the object the method is executed on, kind of like self in Python) is a value or a pointer.

What does this mean, anyway?

In simple terms, a value receiver makes a copy of the value and passes it to the function. The function stack now holds an equal object, but at a different location in memory.

A pointer receiver passes the address of the value to the function. The function stack has a reference to the original object.

A simple example shows the difference.

package main

import (
    "fmt"
)

type Bike struct {
    Model string
    Size int
}

func ValueReceiver(b Bike) {
    fmt.Printf("VALUE :: The address of the received bike is: %p\n", &b)
    // Address of the object is different than the ones in main
    // Changing the object inside the scope here won't reflect to the caller
    b.Model = "BMW"
    fmt.Println("Inside ValueReceiver model: ", b.Model)
}

func PointerReceiver(b *Bike) {
    fmt.Printf("POINTER :: The address of the received bike is: %p\n", b)
    // Address of the object is the same as the ones in main
    b.Model = "BMW"
    fmt.Println("Inside PointerReceiver model: ", b.Model)
}

func main() {
    v := Bike{"Honda CBR", 650}
    p := &Bike{"Suzuki V-Storm", 650}

    fmt.Printf("Value object address in main: %p\nPointer object address in main: %p\n\n", &v, p)
    ValueReceiver(v)
    fmt.Println("Value model outside the function: ", v.Model)

    fmt.Println("")
    PointerReceiver(p)
    fmt.Println("Pointer model outside the function: ", p.Model)
}


// OUTPUT
Value object address in main: 0x40a0e0
Pointer object address in main: 0x40a0f0

VALUE :: The address of the received bike is: 0x40a100
Inside ValueReceiver model:  BMW
Value model outside the function:  Honda CBR

POINTER :: The address of the received bike is: 0x40a0f0
Inside PointerReceiver model:  BMW
Pointer model outside the function:  BMW

So when should you use what?

Regardless of what you choose, it is best practice to keep the struct's methods uniform. If the struct uses both kinds of receiver, it is hard to track which methods use which kind, especially if you weren't the one who wrote the code.
So try your best to use the same receiver type everywhere, for consistency.

Pointers

  • If you want to share a value with its methods

If the method mutates the state of the type, you must use a pointer, or it won't work as expected. Changes through a value receiver are local to the copy and live only inside the function scope.

  • If the struct is very large (optimization)

Large structs with multiple fields may be costly to copy every time they need to be passed around.
If you are in such a case, I think you should first consider breaking the struct down into smaller pieces. If that's not possible, use a pointer.
Optimization usually adds complexity; always think twice before you use it.

Note that pointers are not safe for concurrent use; you need to handle synchronization yourself, using mechanisms such as channels or the atomic and sync standard-library packages.

Values

  • If you don't want to share a value, use a value receiver.

Value receivers are safe for concurrent access. One of Go's biggest advantages is concurrency, so this is a huge plus.
You can never know when you will need it, and if you write library code, you can be sure someone at some point will use it concurrently.

Summary

This was an interesting subject for newcomers like me. It is a concept I didn't have to deal with before developing in Go.
After researching the subject through blogs, documentation, videos, and the almighty Stack Overflow, I came up with a small set of rules of thumb to remember:

  1. Use the same receiver type for all your methods. This isn't always feasible, but try to.
  2. Methods define the behavior of a type; if the method updates or mutates state, use a pointer receiver.
  3. If a method doesn't mutate state, use a value receiver.
  4. Functions operate on values; functions should not depend on the state of a type.

Here's a list of great resources if you want to explore deeper:

by Nic Hartley at March 02, 2019 10:53 AM

Front end project advice

Anyone have advice on how I should create a front end project with some parts using React (I don't wanna use create-react-app)?

by Nic Hartley at March 02, 2019 10:49 AM

ReactJS – Auto Lint & Format on Git Commit with Airbnb Styleguide

This article was initially posted in my blog Coffee N Coding

When you’re working in a team, each developer will have their own style. It’s very important to have a consistent style across all the files.

Looking at a piece of code, you shouldn’t be able to tell who wrote it 😉

With this guide, you’ll be able to set up auto linting and formatting on Git commit.

If you’re a NodeJS developer, read this – NodeJS – Auto Lint & Format on Git Commit with Airbnb Styleguide

It’s divided into 4 parts

You’ll learn

  1. Setup Eslint with Airbnb Style Guide
  2. Setup Formatting with Prettier
  3. Auto Lint & Format on Git Commit
  4. Configure VS Code for Eslint and Prettier

Why you’ll need Linting and Formatting?

  • Clean code
  • Easily find errors, typos, syntax errors
  • Follow the best practices
  • Warning on using deprecated/harmful methods
  • Have a consistent style in code across the team
  • Avoid committing ‘harmful’ code like console.log
  • Make PR awesome, less headache for reviewers!

Setup Eslint with Airbnb Style Guide

ESLint is a linting utility for JavaScript and JSX, with some nice rules and plugins. Anyone can write rules for ESLint. A simple example of a rule could be "avoid using console.log()".
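That particular example actually ships with ESLint as the built-in no-console rule; enabling it in .eslintrc looks something like this (a minimal sketch):

{
  "rules": {
    "no-console": "warn"
  }
}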

Luckily Airbnb has written a Style Guide for JavaScript which covers most of the best practices they use. It’s basically a collection of different rules. You can read it here – Airbnb JavaScript Style Guide

Step 1 – Install necessary packages by
npm i -D eslint eslint-config-airbnb eslint-plugin-import eslint-plugin-jsx-a11y eslint-plugin-react

Step 2 – Create a new file .eslintrc at the root directory of your project and paste the following

{
  "env": {
    "browser": true
  },
  "extends": ["airbnb", "prettier"]
}

Step 3 – Add a new lint command to package.json: "lint": "eslint 'src/**/*.{js,jsx}' --fix"

Now you should be able to lint your code by running npm run lint. It will try to fix errors that are fixable; otherwise, it will throw errors/warnings.

Setup Formatting with Prettier

While ESLint is for linting and finding errors in the code, Prettier is purely for formatting. Besides JavaScript, Prettier also supports formatting JSON, HTML, CSS, Markdown, SQL, YAML, etc. Using both ESLint and Prettier is highly recommended.

Step 1 – Install Prettier CLI package by npm i -D prettier-eslint-cli eslint-config-prettier

Step 2 – Add a new format command to package.json: "format": "prettier --write 'src/**/*.{js,jsx,css,scss}'"

Just like earlier, you should now be able to run npm run format to format the code using Prettier!

Auto Lint & Format on Git Commit

Even though we’ve built commands to run linting and formatting, most of the time developers forget to run them before committing. You can add npm run lint to your CI/CD so that the build fails whenever there are errors. However, it would be really nice to run these checks every time someone commits.

Husky and Lint-staged to the rescue

Husky allows you to add commands that run before committing. It takes advantage of Git hooks.

Lint-staged – “Run linters against staged git files”. Running Eslint and Prettier on all files on every commit will be very time-consuming. With lint-staged you can run those only on the staged files.

Install husky and lint-staged by npm i -D husky lint-staged

You’ll need to edit the package.json to configure it. Here is the full file:

{
  "scripts": {
    "lint": "eslint 'src/**/*.{js,jsx}' --fix",
    "format": "prettier --write 'src/**/*.{js,jsx,css,scss}'"
  },
  "lint-staged": {
    "**/*.js": [
      "eslint --fix",
      "prettier-eslint --write",
      "git add"
    ]
  },
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "devDependencies": {
    "eslint": "^5.15.0",
    "eslint-config-airbnb": "^17.1.0",
    "eslint-config-prettier": "^4.1.0",
    "eslint-plugin-import": "^2.16.0",
    "eslint-plugin-jsx-a11y": "^6.2.1",
    "eslint-plugin-react": "^7.12.4",
    "prettier-eslint-cli": "^4.7.1",
    "husky": "^1.3.1",
    "lint-staged": "^8.1.3"
  }
}

We tell husky to run lint-staged on every commit. Lint-staged will run eslint, prettier, and ‘git add‘ on staged files. The final ‘git add‘ adds the changed files back to the commit, since linting and formatting may have modified them.

Need to commit without these checks?


What if there is a fire 🙂 and your commit is blocked with “Please remove console logs” or something like that? You can tell Git not to run these hooks by adding --no-verify, or -n for short (git commit -n -m "Urgent commit!").

Configure VS Code for Eslint and Prettier

Both ESLint and Prettier have great integrations for VS Code. They will automatically highlight errors/warnings, fix code while typing/saving, etc.

Install Eslint and Prettier extensions by ext install dbaeumer.vscode-eslint and ext install esbenp.prettier-vscode

Once you’ve installed the extensions, open the VS Code settings.json file (Ctrl+,) and add the following:

{
  "editor.formatOnPaste": true,
  "editor.formatOnSave": true,
  "editor.formatOnType": true,
  "prettier.eslintIntegration": true
}

Conclusion

You should now have ESLint and Prettier configured so that whenever you try to commit files, they’ll scan the files and try to fix all errors, or show you the errors that are not automatically fixable. Hope you enjoyed it.

Comment below if you run into any problems or have any other feedback!

This article was initially posted in my blog Coffee N Coding. Follow me on Twitter, I share a lot of cool stuff like this.

Subscribe to my blog via FB Messenger

by Nic Hartley at March 02, 2019 10:42 AM

I developed "Animated Background" a Flutter app for implementing "iOS Login" UI/UX design by Mike Ivanchyshyn

This app is part of a collection of apps that I developed in Flutter with the purpose of mastering the necessary skills for implementing custom UI/UX designs in Android and iOS.

You can find the entire collection of apps in this blog post and all the source code in this GitHub repository.

Animated Background

Original design by Mike Ivanchyshyn:


iOS demo:


Android demo:


by Nic Hartley at March 02, 2019 10:39 AM

How to bypass no paste controls on a web form

Just because a site says we can't paste into a field, doesn't mean we have to believe it.

Inspired by this blog post:

dev.to/claireparker/how-to-prevent-pasting-into-input-fields-nn

Claire Parker-Jones shows how to prevent people from pasting into input fields. This is common code and you'll see it on StackOverflow a lot. Claire's post seemed to receive a lot of flak in the comments, but people do this, and she just wanted to learn how it was done and share that knowledge. She also put the time into creating a codepen example which you can explore and experiment with.

I forked the example code here:

https://codepen.io/eviltester/pen/WPpJGo

This is a terrible UX pattern, but we see it all the time. And as testers we have to work with it, or work around it.

How to bypass no paste code?

So how do we bypass it?

  • inspect and remove listener in the dev tools
  • with code from the console:
document.getElementById("paste-no").onpaste = {};   // a non-callable value, so nothing runs on paste
document.getElementById("paste-no").onpaste = null; // cleanest: removes the handler entirely
document.getElementById("paste-no").onpaste = function(){}; // or swap in a no-op handler
// (these clear the onpaste property; a listener added via addEventListener
// still needs the dev tools approach above)

Since the example lives in a frame, creating a bookmarklet is a little more complicated than usual, but it can still be done. For information on bookmarklets and frames see https://www.irt.org/articles/js170/

Everything in the GUI is ours to observe, interrogate and manipulate. Which is why as testers, the more we understand the technology and the tools, the more we open up possibilities and options in our testing. And we should not limit our testing to the obvious 'happy' paths in the GUI.

If you are interested in learning this type of thing then I have an online course:

https://eviltester.com/techwebtest101

I have a follow-on exclusive video for Patreon supporters showing another way to bypass the pasting (amending the WebElement value attribute) and discussing this in more detail in relation to Software Testing, Risk and Bug & UX Advocacy.

https://www.patreon.com/posts/24482175

Free Video Showing How to Paste into No Paste Fields

by Nic Hartley at March 02, 2019 10:25 AM

Communication is Key

One thing I learned from doing a lot of both customer projects and product development: one main reason development projects fail is communication problems. There are definitely a lot of other reasons (see References), but this is a recurring pattern that I saw.

Or to put it the other way around: don't ever try to lower development project costs by cutting the communication a team needs.

You may have been in a situation like this:

[Team]   : We should set up this meeting where we discuss technical issues.
[Manager]: Who should we send?
[Team]   : Uh...the whole team?
[Manager]: *calculates the cost of the whole team for 1h* 
           No, pick one.

or this:

[Team]   : Can we have a proper kick-off together with the second team in one location?
[Manager]: *calculates the travel costs of sending a whole team to the other location* 
           No, we could send the architect and maybe the Scrum Master.

Usually, one would probably argue a bit before giving up, thinking "OK, maybe it will still work out without the whole team."

Usually, it won't.

Team

Development most often starts with a team.

There is a good reason why Scrum actually defines certain meetings like Planning and Refinement that must be held with the whole team.

There is the Daily, which allows the whole team to see the current development status and, more importantly, whether there are any impediments. This is an important way to let product owners or project leaders know about risks quickly.

The Refinement allows the team to talk about the requirements, and whether they make sense and are well specified. Bad requirements usually lead to bad software or - even worse - to large delays.

But this isn't about Scrum or Agile. If you restrict communication to just one member or some fraction of the team, it will come back to bite you later.

One important part of regular team communication is knowledge sharing. E.g. we spend regular team-time on Pair Programming, Coding Dojos and some way of exchanging small helpful tips with the team.

More importantly, trust the team. No developer likes to spend their whole day in meetings instead of coding. If developers ask to be in a meeting because they think it's necessary, let them.

If they complain about too many meetings, try to reduce the number.

Remote Teams

Adding a second team to a development project adds non-linear complexity to everything, especially communication. This is amplified by having teams at different locations. Without precautions, this can lead to disaster.
This also applies, to some degree, to a single team with developers permanently working from remote locations.

Kick-Off

Let us come back to the conversation from the beginning. It's utterly important to put remote teams together physically once in a while if you can't risk the project failing. Even if travel costs are high.

Developers are - surprise, surprise - not machines. When there are conflicts between individuals (and there will be some), it is really hard to guess what some spoken words actually mean or, more precisely, what the person is thinking at that moment.

Here are some things that could happen when working remotely (and yes, they could also happen when not working remotely):

  • Some people are shy; they simply won't speak up, even if they disagree
  • Some people use irony / sarcasm, which is sometimes hard to detect in emails or voice communication
  • Some people write harsh emails but don't mean it
  • People behave differently in remote meetings than at the coffee machine
  • Teams start to divide into "them" vs. "us" instead of just "us"

And there is certainly more. This is why it is important to put people together, at least once at the beginning. People need to learn to kind of read other people's minds by seeing them, talking to them, making fun, pushing the boundaries, and norming. It's like learning to handle a new tool or technology.

Just as for a team, the "forming - storming - norming - performing" phases apply well to multiple teams if they need to work on one product.

Some of the "not seeing each other" problems could be reduced by video calls but they just don't work that good when having more than two people in a call due to low resolution an missed facial expressions.

The foundation of a good relationship is very often laid at the coffee machine or over lunch together. This is only possible when physically in one location.

Talking to Customers

One other pattern I've seen is to let developers develop and have others, like project managers, do the talking - talking to customers and especially users. When there is a direct customer, there may be a product owner who represents the customer and the users, but it's still very helpful to have direct user contact (see User-Centered Design).

It is very important for the team to know the customers and users, to be able to help steer in the right direction in Refinements or even during implementation. There are often "A or B" decisions, and knowing the user helps developers make the right ones.

Costs

Costs are often the killer argument for not allowing all sorts of communication. However, this is very short-sighted.

Communication issues lead to project delays due to misunderstandings and sick leave that cannot be compensated for, to lower-quality products, products that do the wrong thing, unhappy customers, and demotivated team members.

All of the above are severe but hidden costs that could and should be avoided.

Communication Trainings

Not all communication issues can be solved by just letting people talk to each other. Sometimes it's the way people talk, or how they interpret the spoken words. Often, developers who are really good with technology aren't that good at talking. I know it's a prejudice, but to be honest, I've seen a lot of developers, including myself, who could improve their social skills.

Think about joining a training that improves those skills. This might be about communicating the right things in the right way, about resilience and how to deal with conflicts without taking every argument personally, and much more.

Conclusion

Communication is an important part of a developer's skill set and thus key to successful software development projects. Suppressed communication leads to hidden costs and project risks, and should be avoided by planning communication and its costs upfront. Trust teams to find the right balance between too few and too many regular meetings, and allow team members to see each other in person once in a while.

References

by Nic Hartley at March 02, 2019 10:09 AM

Refactor a form with React Hooks and useState

Introduction

React Hooks are one of those things I decided I would look at later. I've read and heard great things about them, so later is now. I had a component with a form that I thought could be refactored using hooks, so I started with that. It's always easier to begin with small steps.

Before

Nothing fancy: we use the material-ui framework to create a Dialog component. Then we have three TextFields (text inputs) inside of it:


export default class AddItemPopup extends React.Component {

    constructor(props){
        super(props)
        this.state = {
            name: '',
            quantity: 0,
            unitCost: 0
        }
    }

    handleInputChange = e => {
        const {name, value} = e.target
        this.setState({
            [name]: value
        })
    }

    addItem = () => {
        const {name, quantity, unitCost} = this.state

        if(!name || !quantity || !unitCost) return

        this.props.saveItem(this.state)
    }

    render(){

        const {open, closePopup} = this.props
        const {name, quantity, unitCost} = this.state
        return(
            <Dialog 
                open={open}
                onClose={closePopup}>
                <DialogTitle>Add new item</DialogTitle>
                <DialogContent>
                    <TextField 
                        name='name'
                        label='Item name/Description'
                        onChange={this.handleInputChange}
                        value={name}/>
                    <TextField 
                        name='quantity'
                        label='Quantity'
                        onChange={this.handleInputChange}
                        value={quantity}/>
                    <TextField 
                        name='unitCost'
                        label='Unit Cost'
                        onChange={this.handleInputChange}
                        value={unitCost}/>
                </DialogContent>
                <DialogActions>
                    <Button onClick={closePopup} color="secondary" variant="contained">
                        Cancel
                    </Button>
                    <Button onClick={this.addItem} color="primary" variant="contained">
                            Save
                    </Button>
                </DialogActions>
            </Dialog>
        )
    }
}

I spared you the imports at the top of the file, but you get the idea. A class component with a form and a state to keep track of the form inputs' values. Now, let's rewrite this component using the useState hook.

// Import the hook first
import React, {useState} from 'react'

const AddItemPopup = ({
    open, 
    closePopup,
    saveItem
}) => {

    // Declare our state variable called values
    // Initialize with our default values
    const [values, setValues] = useState({name: '', quantity: 0, unitCost: 0})

    const handleInputChange = e => {
        const {name, value} = e.target
        setValues({...values, [name]: value})
    }

    const addItem = () => {
        const {name, quantity, unitCost} = values

        if(!name || !quantity || !unitCost) return

        saveItem(values)
    }
    return(
        <Dialog 
        open={open}
        onClose={closePopup}>
        <DialogTitle>Add new item</DialogTitle>
            <DialogContent>
                <TextField 
                    name='name'
                    label='Item name/Description'
                    onChange={handleInputChange}
                    value={values.name}/>
                <TextField 
                    name='quantity'
                    label='Quantity'
                    onChange={handleInputChange}
                    value={values.quantity}/>
                <TextField 
                    name='unitCost'
                    label='Unit Cost'
                    onChange={handleInputChange}
                    value={values.unitCost}/>
            </DialogContent>
            <DialogActions>
                <Button onClick={closePopup} color="secondary" variant="contained">
                    Cancel
                </Button>
                <Button onClick={addItem} color="primary" variant="contained">
                        Save
                </Button>
            </DialogActions>
        </Dialog>
    )
}

export default AddItemPopup

BOOM! Our component is a function now. What did we do?

  • useState returns two things: the current state ( here as values ) and a function that lets you update it ( here as setValues )
  • useState takes one argument: the initial state.
  • The onChange handler function now uses this setValues function to modify the internal state of the component. As you can see, the values variable is accessible everywhere in the component.

Note: We could have used three different hooks to update each input separately, whichever you find more readable ;) That alternative is sketched below.
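A minimal sketch of the three-hook version (the setter names are mine, not from the original post):

const [name, setName] = useState('')
const [quantity, setQuantity] = useState(0)
const [unitCost, setUnitCost] = useState(0)

// each field then gets its own setter, e.g.:
// <TextField name='name' onChange={e => setName(e.target.value)} value={name}/>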

by Nic Hartley at March 02, 2019 09:56 AM

Hooks for React.js - the new ketchup?


Follow me on Twitter, happy to take your suggestions on topics or improvements /Chris

Hooks are an upcoming feature that lets you use state and other React features without writing a class component - functions FTW.

Hooks are the latest pattern, supposedly better than sliced bread or ketchup, you decide ;). Everyone used to go nuts over render props, but now it's all hooks.

Hooks are now available in React 16.8, so you might need to upgrade your React project.

Problems Hooks are trying to address

Every time something new comes out, we get excited. It's ketchup, it's the best thing since sliced bread, and so on. We hope that this will finally be the solution to all our problems, so we use it again and again, and again. We've all been guilty of doing this at one time or another, abusing a pattern or paradigm, and yes, there has always been some truth to the idea that the pattern in use was limited.

Below I will try to lay out the different pain points that make us see Hooks as this great new thing. A word of caution though: even Hooks will have drawbacks, so use them where they make sense. But now, back to some bashing and raving about how the way we used to build React apps was horrible ;)

There are many problems Hooks are trying to address and solve. Here is a list of offenders:

  • wrapper hell, we all know the so-called wrapper hell. Components are surrounded by layers of providers, consumers, higher-order components, render props, and other abstractions, exhausted yet? ;)

As if the wrapping itself weren't bad enough, we need to restructure our components, which is tedious, but most of all we lose track of how the data flows.

  • increasing complexity, something that starts out small becomes large and complex over time, especially as we add lifecycle methods
  • lifecycle methods do too many things, components might perform some data fetching in componentDidMount and componentDidUpdate. The same componentDidMount method might also contain unrelated logic that sets up event listeners, with cleanup performed in componentWillUnmount

Just create smaller components?

In many cases, it's not possible because:

  • difficult to test, stateful logic is scattered all over the place, which makes it hard to test in isolation
  • classes confuse both people and machines, you have to understand how this works in JavaScript, you have to bind methods to event handlers, etc. The distinction between function and class components in React, and when to use each one, leads to disagreements, and we all know how we can be when we fight for our opinion, spaces vs tabs anyone :)?
  • minify issues, classes present issues for today's tools, too. For example, classes don't minify very well, and they make hot reloading flaky and unreliable. Some of you might love classes and some of you might think functions are the only way. Regardless, we can only use certain React features with classes, and if that causes these minification issues, we must find a better way.

The selling point of Hooks

Hooks let you use more of React's features without classes. Not only that, we are able to create Hooks that will allow you to:

  • extract stateful logic from a component, so it can be tested independently and reused.
  • reuse stateful logic, without changing your component hierarchy. This makes it easy to share Hooks among many components or with the community.

What is a hook?

Hooks let you split one component into smaller functions based on what pieces are related (such as setting up a subscription or fetching data), rather than forcing a split based on lifecycle methods.

Let's have an overview of the different Hooks available to use. Hooks are divided into Basic Hooks and Additional Hooks. Let's list the Basic Hooks first and mention briefly what their role is:

Basic Hooks

  • useState, this is a Hook that allows you to use state inside of a function component
  • useEffect, this is a Hook that allows you to perform side effects in such a way that it replaces several lifecycle methods
  • useContext, accepts a context object (the value returned from React.createContext) and returns the current context value, as given by the nearest context provider for the given context. When the provider updates, this Hook will trigger a re-render with the latest context value.

We will focus on useState and useEffect in this article.

Additional Hooks

We will not be covering Additional Hooks at all as this article would be way too long but you are encouraged to read more about them on Additional Hooks

  • useReducer, alternative to useState, it accepts a reducer and returns a pair with the current state and a dispatch function
  • useCallback, will return a memoized version of the callback that only changes if one of the inputs has changed. This is useful when passing callbacks to optimized child components that rely on reference equality to prevent unnecessary renders
  • useMemo, passes a create function and an array of inputs. useMemo will only recompute the memoized value when one of the inputs has changed. This optimization helps to avoid expensive calculations on every render.
  • useRef, returns a mutable ref object whose .current property is initialized to the passed argument (initialValue). The returned object will persist for the full lifetime of the component
  • useImperativeHandle, customizes the instance value that is exposed to parent components when using ref
  • useLayoutEffect, the signature is identical to useEffect, but it fires synchronously after all DOM mutations. Use this to read layout from the DOM and synchronously re-render
  • useDebugValue, can be used to display a label for custom Hooks in React DevTools

As you can see above I've pretty much borrowed the explanation for each of these Additional Hooks from the documentation. The aim was merely to describe what exists, give a one-liner on each of them and urge you to explore the documentation once you feel you've mastered the Basic Hooks.

useState Hook

This Hook lets us use state inside of a function component. Yep, I've got your attention now, right? Usually that's not possible, and we need to use a class for it. Not anymore. Let's show what using the useState hook looks like. We need to do two things to get started with hooks:

  • scaffold a project using Create React App
  • upgrade react and react-dom; this step is only necessary if you are on a React version before 16.8

The first one we will solve by typing:

npx create-react-app hooks-demo

next, we need to upgrade react and react-dom so they are on a version of React that includes hooks (only needed if you are below 16.8):

yarn add react@next react-dom@next

Now we are good to go.

Our first Hook

Let's create our first hook using useState and focus on just understanding how to use it. Let's see some code:

import React, { useState } from 'react';
const Counter = () => { 
  const [counter, setCounter] = useState(0); 

  return ( 
    <div> {counter} 
      <button onClick={() => setCounter(counter +1)}>
      Increment
      </button> 
   </div> 
  ) 
}

export default Counter;

Ok, we see that we use the useState Hook by invoking it, like so:

useState(0)

This means we give it an initial value of 0. When we invoke useState, we get back an array that we destructure. Let's examine that more closely:

const [counter, setCounter] = useState(0);

Ok, we name the first value in the array counter and the second value setCounter. The first value is the actual value that we can showcase in our render method. The second value setCounter() is a function that we can invoke and thereby change the value of counter. So in a sense, setCounter(3) is equivalent to writing:

this.setState({ counter: 3 })

A second Hook example - using a cart

Just to ensure we understand how to use it fully let's create a few more states:

import React, { useState } from 'react';

const ProductList = () => {
  const [products] = useState([{ id: 1, name: 'Fortnite' }]);
  const [cart, setCart] = useState([]);

  const addToCart = (p) => {
    const newCartItem = { ...p };
    setCart([...cart, newCartItem]);
  };

  return (
    <div>
      <h2>Cart items</h2>
      {cart.map((item, i) => <div key={i}>{item.name}</div>)}
      <h2>Products</h2>
      {products.map(p => (
        <div key={p.id} onClick={() => addToCart(p)}>{p.name}</div>
      ))}
    </div>
  );
};

export default ProductList;

Above we are creating the states products and cart, and we also get the change function setCart(). In the markup we invoke the method addToCart() when clicking any of the items in our products list. That leads to an invocation of setCart(), which causes the selected product to be added as a cart item in our cart state.

This is a simple example, but it really showcases the usage of the useState Hook.
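
If we wanted to take it one step further, removing an item works the same way; here is a quick sketch, assuming products have unique ids (removeFromCart is a name made up for this example):

const removeFromCart = (p) => {
  // Produce a new array without the clicked product; setCart triggers a re-render.
  setCart(cart.filter(item => item.id !== p.id));
};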

Introducing the Effect Hook

The Effect Hook is meant to be used to perform side effects, such as HTTP calls. It covers the same ground as the life cycle methods componentDidMount, componentDidUpdate, and componentWillUnmount.

Here is how we can use it:

import React, { useEffect, useState } from 'react';

const products = [
  { id: 1, name: "Fortnite" }, 
  { id: 2, name: "Doom" }
];

const api = {
  getProducts: () => Promise.resolve(products),
  getProduct: (id) => Promise.resolve(products.find(p => p.id === id))
};

const ProductList = () => { 
  const [products, setProducts] = useState([]); 
  const [product, setProduct] = useState(''); 
  const [selected, setSelected] = useState(2);

  async function fetchData() { 
    const products = await api.getProducts(); 
    setProducts(products); 
  }

  async function fetchProduct(productId) { 
    const p = await api.getProduct(productId); 
    setProduct(p.name); 
  } 

  useEffect(() => { 
    console.log('use effect'); 
    fetchData(); 
    fetchProduct(selected); 
  }, [selected]);

  return (
    <React.Fragment>
      <h1>Async shop</h1>
      <h2>Products</h2>
      {products.map(p => <div key={p.id}>{p.name}</div>)}
      <h3>Selected product</h3> {product}
      <button onClick={() => setSelected(1)}>Change selected</button>
    </React.Fragment>
  );
}

export default ProductList;

Ok, a lot of interesting things are happening here. Let's start by looking at our usage of useEffect:

useEffect(() => { 
  console.log('use effect'); 
  fetchData(); 
  fetchProduct(selected); 
}, [selected]);

What we are seeing above is us calling fetchData() and fetchProduct(). Both of these methods are marked async. Why can't we just make the callback we pass to useEffect async? Because an effect may only return nothing or a clean-up function, while an async function always returns a Promise, so React doesn't allow it.
Looking at the definition of these two methods it looks like the following:

async function fetchData() { 
  const products = await api.getProducts(); 
  setProducts(products); 
}

async function fetchProduct(productId) { 
  const p = await api.getProduct(productId); 
  setProduct(p.name); 
}

We see above that we call getProducts() and getProduct() on our api object, both of which return a Promise. Once those Promises resolve, via await, we call setProducts() and setProduct(), the functions we got from our useState Hooks. Ok, so this explains how useEffect in this case acts like componentDidMount, but there is one more detail. Let's look at our useEffect function again:

useEffect(() => { 
  console.log('use effect'); 
  fetchData(); 
  fetchProduct(selected); 
}, [selected]);

The interesting part above is the second argument, [selected]. This tells React to watch the selected variable: if the value of selected changes, our useEffect function runs again.

Now, try hitting the bottom button and you will see setSelected being invoked, which triggers useEffect, because selected is being watched.

Life cycle

Hooks replace the need for many life cycle methods in general, so it's important for us to understand which ones.
Let's discuss Effect Hooks in particular and their life cycle.
The following is known about their life cycle:

  • By default, React runs the effects after every render
  • Our effect runs after React has flushed changes to the DOM, including after the first render (the sketch below shows how to narrow this down)
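
The second argument to useEffect is what lets you narrow this down. Here is a small sketch of the three common variants, borrowing the selected variable from the earlier example:

// No second argument: the effect runs after every render.
useEffect(() => {
  console.log('after every render');
});

// Empty array: runs once, after the first render (roughly componentDidMount).
useEffect(() => {
  console.log('after the first render only');
}, []);

// With dependencies: runs after the first render and whenever selected changes.
useEffect(() => {
  console.log('selected changed');
}, [selected]);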

Accessing the DOM tree

Let's talk about when we access the DOM tree to perform a side effect. If we are not using Hooks, we would do so in the methods componentDidMount and componentDidUpdate. The reason is that we can't use the render method, because then it would happen too early.
Let's show how we would use life cycle methods to update the DOM:

componentDidMount() {
  document.title = 'Component started';
}

componentDidUpdate() {
  document.title = 'Component updated';
}

We see that we can do so using two different life cycle methods.
Accessing the DOM tree with an Effect Hook would look like the following:

const TitleHook = () => {
  const [title, setTitle] = useState('no title');

  useEffect(() => {
    document.title = `App name: ${title}`;
  });

  return <button onClick={() => setTitle('a new title')}>Set title</button>;
};

As you can see above, inside the effect we have access to state as well as the DOM, and props would be within reach in just the same way.

Let's remind ourselves what we know about our Effect Hook namely this:

Our effect is being run after React has flushed changes to the DOM - including the first render

That means that two life cycle methods can be replaced by one effect.

Handling set up / tear down

Let's now look at another aspect of the useEffect Hook, namely that we can, and should, clean up after ourselves. The idea for that is the following:

useEffect(() => { 
  // set up 
  // perform side effect 
  return () => { 
    // perform clean up here 
  } 
});

Above we see that inside of our useEffect() function we perform our side effect as usual, but we can also set things up. We also see that we return a function. That returned function is invoked when the component unmounts, and also right before the effect runs again.
What we have here is set up and tear down. So how can we use this to our advantage? Let's look at a slightly contrived example to get the idea:

useEffect(() => {
  // Log every second; setInterval returns an id we need for clean up.
  const id = setInterval(() => console.log('logging'), 1000);

  return () => {
    clearInterval(id);
  };
});

The above demonstrates the whole set up and tear down scenario, but as I said, it is a bit contrived. You are more likely to do something else, like setting up a socket connection or some other kind of subscription, like the below:

const onMessage = (message) => {
  // do something with message
};

useEffect(() => { 
  chatRoom.subscribe('roomId', onMessage) 

  return () => { 
    chatRoom.unsubscribe('roomId'); 
  } 
})
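
One caveat with the subscription example as written: with no second argument, the effect tears down and re-subscribes on every render. If the subscription only depends on the room id, we can say so; in this sketch, roomId is an assumed variable rather than the hardcoded string used above:

useEffect(() => {
  chatRoom.subscribe(roomId, onMessage);

  return () => {
    // Only runs when roomId changes or when the component unmounts.
    chatRoom.unsubscribe(roomId);
  };
}, [roomId]);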

Can I create my own Hook?

Yes, you can. With useState and useEffect the world is your oyster. You can create whatever Hook you need.

Ask yourself the following questions: Will my component have state? Will I need to manipulate the DOM or maybe make an AJAX call? Most of all, is it something reusable that more than one component can benefit from? If the answer is yes several times over, a custom Hook is a good fit.

Let's look at some interesting candidates and see how we can use Hooks to build them out. You could be creating things like:

  • a modal, which has state that says whether it is shown or not; we need to manipulate the DOM to add the modal itself, and it needs to clean up after itself when it closes (see the sketch after this list)
  • a feature flag, which has state that says whether something should be shown or not; it needs to get its initial state from somewhere, like localStorage and/or over HTTP
  • a cart, which in an e-commerce app most likely follows us everywhere; we can sync a cart to localStorage as well as to a backend endpoint
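
To give an idea of the first candidate, here is a minimal sketch of what a useModal Hook could look like, assuming the component renders the modal markup itself and the Hook only manages visibility plus one DOM side effect (locking body scroll while open). All names here are made up:

import { useState, useEffect } from 'react';

// Sketch of a useModal Hook: state for visibility, an effect for the
// DOM side effect, and clean-up if we unmount while the modal is open.
function useModal(initiallyOpen = false) {
  const [isOpen, setIsOpen] = useState(initiallyOpen);

  useEffect(() => {
    document.body.style.overflow = isOpen ? 'hidden' : '';

    return () => {
      document.body.style.overflow = '';
    };
  }, [isOpen]);

  const open = () => setIsOpen(true);
  const close = () => setIsOpen(false);

  return [isOpen, open, close];
}

export default useModal;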

Feature flag

Let's try to sketch out our Hook and how it should behave:

import { useState } from 'react';

function useFeatureFlag(flag) {
  let flags = localStorage.getItem('flags');
  flags = flags ? JSON.parse(flags) : null;

  const [enabled] = useState(Boolean(flags ? flags[flag] : false));

  return [enabled];
}

export default useFeatureFlag;

Above we have created a Hook called useFeatureFlag. It reads its value from localStorage and uses useState to set up the Hook's state. The reason we don't destructure a set method out of the Hook is that we don't want this value to change unless we reload the whole page, at which point we read from localStorage anew.

Testing our Hook

Now that we have created our custom Hook, let's take it for a spin. The idea is for any component that uses our Hook to only read from its value. How the feature flag value is stored is up to the Hook. So the Hook is an abstraction over localStorage.

import React from 'react';
import useFeatureFlag from './flag';

const TestComponent = ({ flag }) => {
  const [enabled] = useFeatureFlag(flag);

  return (
    <React.Fragment>
      <div>Normal component</div>
      {enabled &&
        <div>Experimental</div>
      }
    </React.Fragment>
  );
};

export default TestComponent;


// using it
<TestComponent flag="experiment1" />

Creating an Admin page for our Feature Flag

We said earlier that we weren't interested in changing the value exposed by useFeatureFlag. To control our feature flags, we opt to create a dedicated Admin page. We count on the Admin page living at one route and the component with the feature flag living at another. If that is the case, navigating between the two pages means the feature flag component reads from localStorage anew.

Back to the Admin page: it would be neat if we could list all the flags and toggle them any way we want. Let's write such a component. It should be quite simple, as it only renders a list of flags; however, it needs to be able to update a flag when the user chooses to.

We will need the following:

  • a simple list component, that renders all the feature flags and supports the toggling of a specific flag
  • a Hook, that is an abstraction over localStorage but that is also able to update its state

The code follows below:

import React, { useState } from 'react';

const useFlags = () => {
  let flags = localStorage.getItem('flags');
  flags = flags ? JSON.parse(flags) : {};

  const [flagsValue, setFlagsValue] = useState(flags);

  const updateFlags = (f) => {
    localStorage.setItem('flags', JSON.stringify(f));
    setFlagsValue(f);
  };

  return [flagsValue, updateFlags];
};

const FlagsPage = () => {
  const [flags, setFlags] = useFlags();

  const toggleFlag = (f) => {
    const currentValue = Boolean(flags[f]);
    // A computed property key is needed here; `flags[f]:` is a syntax error.
    setFlags({ ...flags, [f]: !currentValue });
  };

  return (
    <React.Fragment>
      <h1>Flags page</h1>
      {Object
        .keys(flags)
        // List every flag, enabled or not, so a disabled flag can be re-enabled.
        .map(flag =>
          <div key={flag}>
            <button onClick={() => toggleFlag(flag)}>{flag}</button>
          </div>
        )
      }
    </React.Fragment>
  );
};

export default FlagsPage;

What we are doing above is reading the flags out of localStorage and then rendering them all in the component. While rendering them, flag by flag, we also hook up (I know we are talking about Hooks here, but no pun intended, really :)) a method on the onClick handler. That method is toggleFlag(), which lets us change a specific flag. Inside toggleFlag() we not only set the new flag value, we also make sure our flags state has the latest value by invoking setFlags on the Hook.

It should also be said that creating the useFlags Hook has made the code in the FlagsPage component quite simple, so Hooks are good at cleaning things up a bit too.

Summary

In this article, we have tried to explain the background, the reasons Hooks were created, and the problems they were looking to address and hopefully fix.
We've learned that Hooks are a way to give function components state, but that they are also able to remove the need for some life cycle methods. A lot of Hooks are given to you out of the box, including the following two:

  • useState, a Hook we can use to persist state in a function component
  • useEffect, a Hook for performing side effects

but there are many, many more that I urge you to go explore, like these:

Additional Hooks https://reactjs.org/docs/hooks-reference.html#additional-hooks

With Hooks we can create really cool and reusable functionality, so go out there, be awesome, and create your own Hooks.

Further reading

I welcome any comments, or maybe a link to a Hook you built :)
Stay awesome out there!

by Nic Hartley at March 02, 2019 09:30 AM