
Are Technical Skills Overrated?


Hold on, before you go all out on me, I’m a software engineer myself and as such, I understand the impact of technical knowledge. The impact of knowing React, NodeJS, Python, Ruby, Rust, PHP, and whatnot — you could build an Uber or Facebook and solve some of the world’s toughest problems with that knowledge. You could probably build your own Amazon, Microsoft, Google, Slack, SafeBoda, Jumia, Twitter or any other tech giant you can think of. But what enables you to thrive while at it is not exactly the tech that you know, as we will see further on in this article.

First things first, think about this: were any of those tech giants built to their current capacity by a single individual in a dark room with a hoodie over their head and lots of caffeine in their system? The answer is most definitely no. It took several people to bring those ground-breaking applications to the glory they enjoy today. This simply means that knowing how to code isn’t all there is to it. It takes a team, and as such, one must be able to work productively in a team. That right there is where technical skills tend to get overrated.

I once worked on a project with a software engineer who didn’t have much programming experience, and it was very evident in the quality of work he delivered. It was probably the toughest period of my work — not because he didn’t know what to do sometimes (because that happens even to the best in this industry), but because he was never receptive to feedback that required him to think differently about things, however nicely we tried to craft it. He just never liked being shown that what he’d done could be improved, and as such, we ran into multiple problems whenever anyone was working on a task that related to his. We would have to change a lot of what he touched, so progress was really slow, and in some cases the quality of the work that made it into the end product by release time was subpar.

In contrast, I worked with another person on another team during a boot camp and the experience was completely different. He was not as technically adept as the former engineer, but he acknowledged that, and it was very evident that he wanted to learn from those who knew more than him. He always reached out when he didn’t understand something and repeatedly bugged us until he understood it. He always kept us in the loop about what he was working on, no matter how small the piece he was working on. He always asked for feedback and, ironically, wanted to hear the negative feedback more. Whenever you praised his work, he would ask questions like “How would you do it?” followed by “Why would you do it like that?”, and if he realized that your approach had performance benefits for the product, he’d strive hard to implement his next task with those benefits included in his solution. He was the ultimate workmate. Working with him felt like a challenge even to those who knew more than him, because he got you to think about why you’re doing what you’re doing, and in the process, you all learned together.

One may argue that at the end of the day, it was the technical knowledge that brought this awesome product to life, but if you think about it, it wasn’t. It was the fact that our team was teachable and had great communication skills, and those aspects powered the technical side of things. Even in a situation where you don’t have a struggling team, where you have only superstar engineers who seemingly can’t do anything wrong, they most definitely need to remain teachable, because user needs evolve, and technologies evolve as well. Today we’re writing JavaScript, but a few decades ago it didn’t exist. Several decades ago, programs were written in BASIC, which is rather obscure today. Technologists have to evolve through many technologies for us to remain relevant and create better solutions for the world.

Anybody can learn to code, but not everybody is ready to always remain teachable. Not everybody is ready to take in negative feedback positively. Not everybody is ready to change their introvert selves just a small bit to start communicating effectively. Not everybody has a high degree of integrity. A lot of these aspects are attached to one’s personality, and that isn’t something that changes over a day or two. In some people, it never changes, and yet we’ve seen how deeply that can impact the progress of a product that’s intended to become the next Amazon, say.

People don’t just wake up one day with the grumpy attitude from the previous night (and the weeks before) suddenly turned into a charming and welcoming attitude towards their teammates. It takes a much longer process for that to happen. Yet if it were just a matter of writing code, some people could quickly check out the Django documentation and, by the next morning, build kickass Django apps. This introduces the concepts of EQ (Emotional Quotient/Intelligence) and IQ (Intelligence Quotient). It is much easier to grow your IQ than your EQ, and I think valuing this has been a crucial part of Andela’s hiring since the company’s inception and, as such, has helped us become a strong force to reckon with. EQ goes a long way to impact your IQ.

An engineer who works with EQ as much as they do IQ will forever be relevant, and you just can’t afford to ignore them.

I’ll leave you with a short quote from a renowned author.

“There is no separation of mind and emotions; emotions, thinking and learning are all linked.” — Eric Jensen



How To Stand Out As a Newbie Software Engineer


Learning how to code is arguably the gold rush of this time, as the number of people looking to build careers in technology continues to rise. It is increasingly easy to find learning opportunities – online or offline – which anyone can take advantage of to learn software development. When it comes down to it, getting good at this craft – like any other worthwhile craft – takes some doing. Whether you’re doing this as a hobby or as a means to land a first job and launch a career in software, you’re going to need to know how to handle yourself and navigate your way to where you need to go.

If you recently started to code, the following tips should be of help to you:

1. Learning to Code is a marathon, not a sprint.

Some newbies tend to lose steam and get discouraged when they hit a bump in the road, usually in the form of a task or challenge they are unable to easily breeze past. Depending on your background prior to learning to code, it is not uncommon to often feel like you don’t know what you’re doing. It is perfectly okay to feel that way – almost everyone you know and admire today had those same feelings when they first started out. The goal is to not stop probing and trying to learn. The feeling will occur less often as you progress.

2. Git Right

One of the first things you will need to master is Git. Git is a version control tool that helps you manage and save versions of your code. It will literally save you from the far too common “Oh my, I have a deadline to meet and I’ve lost all my code” problem you may already have experienced or seen. Git allows you to manage multiple versions of your project so that you can retrieve and use any version you need at any time. If you want to stand out, you’ve got to learn how to wield Git effectively and save yourself from hassles. GitHub put together a handy guide that explains version control and Git, which you should check out.
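To make the workflow concrete, here is a minimal sketch of the everyday Git loop (the project name, file name and commit message are placeholders):

$ git init my-project          # create a new repository
$ cd my-project
$ git add app.js               # stage a new or changed file
$ git commit -m "Add app"      # save a snapshot you can always return to
$ git log --oneline            # list the versions you have saved
$ git checkout -b new-idea     # branch off to experiment without risk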

3. Build (and Break) Things

Software engineering is a craft where you learn better and faster by building (and breaking) things. Don’t get caught up with only reading and clicking next on the tutorial videos that you don’t actively challenge yourself by taking on the exercises and code challenges. You will break stuff, and your code is not going to work a lot of the time at first – and that’s okay. You will learn better that way. Whatever programming language you’re learning, you should work on being able to build an app well enough with it. Be able to think through problems and solutions in that one language before trying to learn everything else.

4. Learn The Agile Method

If you’re going to be working in a team on projects that users need, being familiar with the Agile software development method will prove invaluable to you. Most software engineering teams use the Agile method to organize and manage their project workflows. The Agile method is a system that utilizes iterative workflows (also called sprints) to build software products. This process is iterative because it is reliant on collaboration and feedback from team members at various points of the project workflow to allow the team to manage and respond to unpredictable or unplanned events that occur along the way in a timely fashion. To learn more about Agile, see this short video that explains it pretty well.

There are a few other things that will help you stand out as a newbie software engineer. Soft skills like empathy, collaboration, communication, problem solving, and the ability to take and apply feedback (constructive criticism) go a long way to helping you stand out from your peers. In this industry, technical skills are the baseline of entry – everyone who gets in does so by meeting the technical requirements. But as you grow your technical skills, you have to work on growing your soft skills as well, so you can rise as high as you need to.

Finally, it is important to reiterate the need for you to hang in there and keep practicing. I’ll leave you with a quote by Malcolm Gladwell:

“Practice isn’t the thing you do once you’re good. It is the thing you do that makes you good.”


How To Manage Communication as a Distributed Product Manager


With more companies embracing distributed and remote work teams, the communication challenge takes on a completely different form — product managers aren’t only expected to be able to communicate excellently, but be able to pass across a message to people with different levels of understanding of the subject matter, in different locations, in varying formats.

Take a quick look at any list of required skills for a product manager, and there’s absolutely no doubt that any PM worth their salt must not only be a rockstar at communicating, but also be able to do it in a systematic (repeatable, predictable) manner. We can go into the many reasons why communication is vital, but we’ll instead focus on the added requirements of communicating in a remote or distributed team.

There is no hard and fast rule, but there are some principles that stay true and consistent — distributed or not.

1. Show up prepared

This should go without saying, but with the rise and rise of crazy-busy product managers, it’s almost easy to go through life winging it. More often than not, the PM is expected to be the most knowledgeable person in the room — and as first impressions go, if you cannot provide a knowledgeable answer to a question, the likelihood that everything else you say after that will be taken seriously reduces significantly.

This is even more important in a distributed team because of the associated implicit barrier to access, and with it, a reduced audience attention span. You want to reduce the work required to pass across a vital piece of information by being certain of what you’re talking about upfront. It’s unlikely that you’ll have informed responses to a question if you are not prepared beforehand.

Read that email, scan through that Product Requirements Document, do a quick search about your customer’s business. Preparedness helps you grab and hold attention, thereby making for more impactful messaging.

2. Routine

This is simply having a regular, predictable and reusable schedule about most things and keeping an almost strict pattern of getting things done.

While most teams generally benefit from synchronous communication and schedules, in distributed teams, it becomes more important to have a strict regimen because you want your team or stakeholders to be able to plan with and around your schedule — regardless of timezone barriers.

Distributed teams thrive on structure, and having a routine helps to manage expectations of your stakeholders — internal or external. A routine not only reduces the burden on your stakeholders to always try to figure out things on their own, but it also helps you become a master at planning your own work.

Define a working pattern for team meetings, for product updates, for pairing sessions, for debriefs, and then have everyone be sufficiently aware of this pattern. But most importantly, do your best to stick to that routine.

3. Verbosity over TL;DR

Verbosity literally means having a lot of words. But in this context, it connotes using as many words as it takes for your audience to understand the message you are communicating.

Speaking to a person who is right in front of you is already difficult in itself when you don’t have the right words, but having to communicate remotely with a person in a different location requires that you do not leave any room for ambiguity. Ambiguity births confusion, confusion births rework, rework births scope creep or misaligned expectations, which in turn births failure.

But there’s a caveat here, which is to know your audience — concise is not always equal to short. As much as possible, always opt for clarity over anything else.

This is a game of balance — if the answer to the question “will this provide everything the recipient needs to be sufficiently informed about the subject matter” is no, then you should take a second look. Repeat as often as it takes to achieve that clarity.

4. Share Early; Share Often

Most successful projects rely on effective collaboration, and this is predicated on getting the right amount of information shared with collaborators early enough, and as often as things change.

Again, because there is already a barrier to access when you are not physically co-located, the goal as a distributed PM is to reduce the time and effort it takes to build consensus.

The more people know, the easier it becomes to build consensus. It may be uncomfortable — actually, it is uncomfortable to share when you don’t have all the answers, but the fact that people will feel valued when they are kept in the know trumps most feelings of discomfort. It helps to use the 5%-30%-90% feedback methodology here.

Comfort with distributed and remote work may be increasing across many industries, but communication remains the cornerstone of success as a product manager. A product manager may be great at their job while co-located with their engineers and other stakeholders, but there are additional nuances required to achieve the same level of success in a distributed team; effective communication is one of the most important.


RubyConf Kenya 2019: My Nairuby Recap


Two weeks ago was the first time I attended a Ruby conference outside my home country, and boy, was it an experience. This is my recap of the conference.

Nairobi, the capital of Kenya, is a nice, calm city. Despite its calm, East Africans see Nairobi as the fastest-paced city in the region (coming from Lagos, you can imagine the astonishment I felt but tried, unsuccessfully, to conceal as I thought “…oh friend, if only you knew.”). Nairobi is the perfect place to host Nairuby (no pun intended). For an international conference, the city provided the perfect weather and the perfect cadence for tech minds to convene and geek out together. There is more to Nairobi, but let’s leave this here.

Here’s how the conference panned out:

Conference Day 1

The first talk I attended was Vishal Chandnani’s talk on how to Debug Hard. Vishal spoke about how some errors we face while programming may not be a fault of the framework or language, but of our inputs. He demonstrated that it was possible to build the Ruby binary locally and used this to debug Ruby internals.

The most interesting talk for me on Day 1 was by Denis Sellu on Serverless. Denis is a coconut connoisseur and sees the world in terms of coconuts: the same way coconuts are not nuts, serverless applications may not need servers. He demoed his tool, Ruby Lambda, which automates the admin work out of your way and enables you to build Ruby applications as serverless lambda functions.

Conference Day 2

In most conferences and events, the organizers usually front-load the conference, with the most interesting talks and presentations happening on the first day. This was not the case with Nairuby; the talks on the second day were just as interesting as the first.

Prathamesh Sonpatki started the day with How To Handle Assets in Rails 6. The focus was on Webpack and how it solves many issues with asset management. I learned here that Webpack could be a replacement for the asset pipeline, but this convention is not mainstream yet. Currently, there usually is a mix between Webpack and Sprockets.

Amr Abdelwahab presented the fantastic research he has been conducting on Ruby, its community, and the hype around it, properly deconstructing that hype. I learned about highly performant Rubies in this talk. With a question, I also formed the hypothesis that the fastest Ruby (and Ruby-inspired) implementations may be those that run on a VM: JRuby on the JVM, TruffleRuby on GraalVM, and Elixir on BEAM. Amr emphasized the impact of community, and the benefits of inclusivity in building sustainable software.

Stella Maris spoke on Application Security. Security is a segment of application development often overlooked by developers. Stella reminds us that “good software development skills do not ensure good security skills.”

Conference Day 3

On Day 3, I suppose I learned – to a greater degree – the importance of community in the developer ecosystem. Rails Girls NBO led a session introducing beginners to Ruby and Rails. Every experienced engineer was assigned a few beginners to teach. I led two other developers in building a rudimentary IRB. Little projects like these are enough to spark interest in the minds of people who are new to the language. At the end of the day, I gave a lightning talk on one of the methods I adopt in learning: the Shu Ha Ri method.

Conclusion

Community is an integral part of software development. One may grow independently by studying the language in isolation, and may actually go on to build very amazing tools and software. But by being part of an active community, a software engineer’s impact is amplified exponentially. My advice to every engineer is to find a community to be actively involved in. This would provide you with an outlet to share your skills, and the necessary drive to get better at your craft.

Another thing I took away from this experience is the value of mentorship. Learning with a mentor — preferably someone who has had experience working with the skill you plan on learning — typically helps you learn faster and better than toeing the path of the lone wolf. I was humbled and excited to see experienced engineers sit with absolute beginners and patiently walk them through the basics of Ruby and Rails.

Seeing the community at Nairuby and Rails Girls NBO, I am inspired to bring such a community to Lagos and West Africa as a whole. Watch this space for communications around that.


On Designing Good Microservices Architectures


It may be difficult to know exactly what constitutes a well-designed microservice on your first assignment working with microservices. Many teams have fallen into the same trap of making their microservice too small or too tightly coupled.

In this article, we will talk about the characteristics of a well-designed microservice and provide additional guidance to follow to overcome any common mistakes.

Characteristics of a Well-Designed Microservice

Don’t fall into the trap of using arbitrary rules like “you should have x lines of code” or “turn your function into a microservice”, etc. These rules often prove insufficient when it comes to determining the boundaries for your microservices.

If you’ve read about microservices, you’ve no doubt come across advice on what makes a well-designed service. Simply put, high cohesion and loose coupling. While it’s sound advice, these concepts are quite abstract.

So how can you set the right boundaries for your microservices?

I will guide you step by step, so let’s talk first about the true characteristics of microservices:

1: A well-designed service doesn’t share database tables with another service

When it comes to designing a microservice, if you have multiple services referencing the same table, that’s a red flag, as it likely means your database is a source of coupling.

Each service should rely on its own set of underlying data stores. This allows us to centralize access controls, audit logging, caching logic, etc.

2: A well-designed service has a minimal amount of database tables

A microservice should be small enough, but no smaller, and the same goes for the number of database tables per service: one or two tables for a service.

An example of an ideal microservice is one that handles and keeps track of millions or even billions of entries around suppressions; because it is focused solely on suppression, it really only needs one or two tables.

3: A well-designed service is thoughtfully stateful or stateless

Always ask yourself whether your service requires access to a database or if it’s going to be a stateless service processing terabytes of data like emails or logs. Be clear about this upfront and it will lead to a better-designed service.

4: A well-designed service’s data availability needs are accounted for

When designing a microservice, you need to keep in mind which services will rely on this new service and what the system-wide impact would be if that data became unavailable. Taking that into account allows you to properly design data backup and recovery systems for the service.

Here are some keys to maintaining a high level of data availability:

– Have a plan – include RPO (recovery point objective) and RTO (recovery time objective) targets that define, respectively, how much data you can afford to lose and how quickly data must be restored.

– Employ redundancy – keeping backup copies of your data ensures that a single failure won’t result in permanent loss of information.

– Eliminate single points of failure.

– Take advantage of virtualization.

5: A well-designed service is a single source of truth

Whenever you design a service, aim for it to be the single source of truth (SSOT) for something in your system. Deployment of an SSOT architecture is becoming increasingly important in enterprise settings as a way of greatly minimizing the risk of retrieving outdated, and therefore incorrect, information. A common example is the electronic health record, where it is imperative to accurately validate patient identity against a single referential repository, which serves as the SSOT.

The benefits of getting to a single source of truth, of course, are enormous.

1. It can catapult an organization to become one that is truly data-driven.

2. The quality of data increases, and fewer mistakes in communication are made.

3. Costs of errors are decreased.

After applying the characteristics, what to look at?

Once you’ve applied the above characteristics, you should take a step back and determine whether the service you’ve created is too small or not properly defined.

During the testing and implementation phase of your microservice system, there are a number of indicators to keep in mind:

  • Look out for any over-reliance between services. If two services are constantly calling back to one another, that’s a strong indication of coupling and a signal that they might be better off combined into one service.
  • Weigh whether the overhead of setting up the service outweighs the benefit of having it be independent. There’s a huge foundation of things that have to exist in order for an app to just run. For instance, you need a standard procedure to run when things break and a way to handle failures properly.

What was the impact of microservices on a company like Amazon?

Amazon is a perfect example of a large organization with multiple teams. Jeff Bezos issued a mandate to all employees informing them that every team within the company had to communicate via API. Anyone who didn’t would be fired.

This way, all data and functionality were exposed through interfaces. Bezos also managed to get every team to decouple, define what their resources are, and make them available through APIs. Amazon was building a system from the ground up, and this allowed every team within the company to become a partner to one another.

Example of a successful microservice:

Let’s consider a small Spring Boot project for managing students and courses. Here is the structure we will create, broken down file by file:

  • StudentController.java – Rest controller exposing the service methods discussed below.
  • Course.java, Student.java, StudentService.java – Business Logic for the application. StudentService exposes a couple of methods we would consume from our Rest Controller.
  • StudentControllerIT.java – Integration Tests for the Rest Services.
  • StudentControllerTest.java – Unit Tests for the Rest Services.
  • StudentServicesApplication.java – Launcher for the Spring Boot Application. To run the application, just launch this file as Java Application.
  • pom.xml – It contains all the dependencies needed to build this project. We will use the Spring Boot Starter Web.

Then we will create a REST service with Spring Initializr. We will use Spring Web MVC as our web framework. (Spring Initializr http://start.spring.io/ is a great tool to bootstrap your Spring Boot projects.)

Now we need to implement the business service for the application. All applications need data, but instead of talking to a real database, we will use an ArrayList as an in-memory data store.

A student can take multiple courses. A course has an id, name, description and a list of steps you need to complete to finish the course. A student has an id, name, description and a list of courses he/she is currently registered for. We have StudentService exposing the following methods (a sketch of the service follows the list):

  • public List<Student> retrieveAllStudents() – Retrieve details for all students
  • public Student retrieveStudent(String studentId) – Retrieve a specific student details
  • public List<Course> retrieveCourses(String studentId) – Retrieve all courses a student is registered for
  • public Course retrieveCourse(String studentId, String courseId) – Retrieve details of a specific course a student is registered for
  • public Course addCourse(String studentId, Course course) – Add a course to an existing student
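Here is a minimal sketch of what that business service might look like, assuming simple Student and Course beans with getters and setters (the UUID-based id generation is an assumption, not from the original walkthrough):

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

import org.springframework.stereotype.Component;

@Component
public class StudentService {

  // In-memory data store standing in for a real database
  private static final List<Student> students = new ArrayList<>();

  public List<Student> retrieveAllStudents() {
    return students;
  }

  public Student retrieveStudent(String studentId) {
    return students.stream()
        .filter(s -> s.getId().equals(studentId))
        .findFirst()
        .orElse(null);
  }

  public List<Course> retrieveCourses(String studentId) {
    Student student = retrieveStudent(studentId);
    return student == null ? null : student.getCourses();
  }

  public Course retrieveCourse(String studentId, String courseId) {
    Student student = retrieveStudent(studentId);
    if (student == null) return null;
    return student.getCourses().stream()
        .filter(c -> c.getId().equals(courseId))
        .findFirst()
        .orElse(null);
  }

  public Course addCourse(String studentId, Course course) {
    Student student = retrieveStudent(studentId);
    if (student == null) return null;
    course.setId(UUID.randomUUID().toString()); // give the new course an id
    student.getCourses().add(course);
    return course;
  }
}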

Then we will add a couple of GET REST services (a sketch of the controller follows the list below):

  • The Rest controller StudentController exposes a couple of GET services.
  • @Autowired private StudentService studentService: We are using Spring autowiring to wire the student service into the StudentController.
  • @GetMapping("/students/{studentId}/courses"): Exposing a GET service with studentId as a path variable.
  • @GetMapping("/students/{studentId}/courses/{courseId}"): Exposing a GET service for retrieving a specific course of a student.
  • @PathVariable String studentId: The value of studentId from the URL will be mapped to this parameter.
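Putting those pieces together, here is a minimal sketch of the controller’s GET endpoints (the method names are assumptions; error handling is omitted):

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StudentController {

  @Autowired
  private StudentService studentService; // wired in by Spring

  // Retrieve all courses a student is registered for
  @GetMapping("/students/{studentId}/courses")
  public List<Course> retrieveCoursesForStudent(@PathVariable String studentId) {
    return studentService.retrieveCourses(studentId);
  }

  // Retrieve the details of one specific course for a student
  @GetMapping("/students/{studentId}/courses/{courseId}")
  public Course retrieveDetailsForCourse(@PathVariable String studentId,
                                         @PathVariable String courseId) {
    return studentService.retrieveCourse(studentId, courseId);
  }
}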

Let’s execute the GET service using Postman: we will fire a request to http://localhost:8080/students/Student1/courses/Course1 to test the service. The response is shown below.

{
  "id": "Course1",
  "name": "Spring",
  "description": "10 Steps",
  "steps": [
    "Learn Maven",
    "Import Project",
    "First Example",
    "Second Example"
  ]
}


Adding a POST REST service: a POST service should return a status of 201 (Created) when resource creation is successful.

@PostMapping("/students/{studentId}/courses"): Mapping a URL for the POST request.

@RequestBody Course newCourse: Using binding to map the body of the request to a Course object. ResponseEntity.created(location).build(): Returns a status of 201 Created, along with the location of the created resource as a response header. A sketch of the endpoint follows.
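Here is a minimal sketch of that POST endpoint, added to the StudentController above. Building the Location header with ServletUriComponentsBuilder is a common Spring idiom, but the exact body here is an assumption:

import java.net.URI;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.servlet.support.ServletUriComponentsBuilder;

// Register a new course for a student
@PostMapping("/students/{studentId}/courses")
public ResponseEntity<Void> registerStudentForCourse(
    @PathVariable String studentId, @RequestBody Course newCourse) {

  Course course = studentService.addCourse(studentId, newCourse);
  if (course == null) {
    return ResponseEntity.notFound().build(); // unknown student
  }

  // The Location header points at the newly created course resource
  URI location = ServletUriComponentsBuilder.fromCurrentRequest()
      .path("/{id}")
      .buildAndExpand(course.getId())
      .toUri();
  return ResponseEntity.created(location).build(); // 201 Created
}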

Let’s execute the POST REST service with a request body that contains all the details needed to register a course for a student:

{
  "name": "Microservices",
  "description": "10 Steps",
  "steps": [
    "Learn How to Break Things Up",
    "Automate the hell out of everything",
    "Have fun"
  ]
}

The URL we use is http://localhost:8080/students/Student1/courses

Recap

Designing microservices can often feel more like an art than a science, and a lot of the advice out there is fairly abstract, leading to confusing discussions.

The microservice architecture enables the rapid, frequent and reliable delivery of large, complex applications. It also enables an organization to evolve its technology stack.

If you have additional questions, feel free to leave comments under this post.


Introducing The Andela Talent Marketplace


Andela set out as a company to advance human potential by investing heavily in building technology leaders for the future, and we have continued to record tremendous success. Our work with the Andela Learning Community is proof – in collaboration with partners like Udacity, Google and PluralSight, we have trained over 30,000 learners in Mobile & Web development, Google Cloud Technology, etc, as we march on towards our bold mission of training 100,000 software engineers across Africa in 10 years.

We recently launched the talent marketplace for graduates of the ALC program to enable us to continue providing opportunities for them by connecting them with employers of labour in their local ecosystems. We are running a pilot program in Nigeria in collaboration with Stutern, an already existing Talent Marketplace with experience in matching graduates with employers in this market. 

 

Why We’re Building This 

We know how difficult it is to find and hire good talent. Often, it’s not for a lack of talent that the hiring process is tedious, but because recruiters don’t have a go-to place to select said talent from. This is one of the problems the Andela Talent Marketplace will solve. We’re aggregating a pool of technical talent who have completed the ALC program. Interested employers and recruiters will be able to choose and hire talent from that pool.

We also believe it will afford ALC graduates the opportunity to move farther along in their journeys to being technology leaders, alongside growing the local tech ecosystem. 

 

How it Works

Are you an employer looking to hire entry-level software engineers (or you’re filling another entry-level technical role)? If you would like to save time on finding suitable talent for your company, you can request and get granted access to the ALC graduates on our Talent Marketplace by filling out this form. Someone from the team will get in touch with you afterward to help with the next steps. 

 

Look out for further communications on this program on our social channels or on this blog.


Finding Answers: Benedicte Musabimana’s Dev Journey


“When I was a kid, I always wanted to know how a computer or a mobile phone worked.”

Benedicte is a software engineer at Andela Kigali. Like many people, her first time on a computer was in junior high school. She was hooked from then on, and thanks to her school teacher, she learned there were other interesting things one could do with the machine – besides Microsoft Word and Excel. Her curiosity led her to major in Computer Science in university and set her on the path to becoming a software engineer today. But her journey wasn’t an easy one.

“The journey was very difficult for me. Everything was very new for me because my background was not related to computers. But I knew that I was learning what I like, so I pushed harder to make it.” She learned and worked at it until she got good at it.

Benedicte is a React, Node.js & Express engineer. When asked what she liked most about being a software engineer, she said “It is always exciting to learn a new stack, and I get motivated after delivering a product and it is approved. I am also looking forward to working on projects that I know will change people’s lives in a positive way.”


Ebook: Getting Started with Managing Distributed Teams


Evolving HR Through Design Thinking


Growing up as a child, a lot of things intrigued me. Top of the list were food, my special stones (aka pebbles), and caring for people as a nurse, even though I wasn’t a fan of medication and injections. I think the one thing that made my siblings laugh was the fact that I remembered longer sentences over simple ones. For example, remembering “….the difference is clear” rather than simply “7-up” was something I was endlessly teased about as a child.

It was fun to explore and understand how the universe worked altogether. So it was only natural for me to grow and learn within the world of natural sciences. Isn’t it amazing how simple fundamental laws of science can be applied to help us understand some basic concepts of our day-to-day lives? Using Newton’s second law of motion, we know that F = ma (where F = force, m = mass and a = acceleration). In order to find a, the formula rearranges to a = F/m. We can, therefore, make an inference and say that the rate at which an individual “accelerates” in life is largely dependent on their will (the “force”) to succeed as well as their ability to gather the necessary skills (“mass”) to enhance their craft.

I believe that’s how technology has metamorphosed over time. Our need to make our day-to-day lives better and more efficient has brought about the innovative technological solutions that we’ve come to know today. I’m really delighted to see so many of these technological innovations taking place in Africa, influencing multiple sectors such as banking (M-Pesa in Kenya), entertainment (IrokoTV in Nigeria), agriculture (SysComp in Ghana, Pula in Kenya), healthcare (GiftedMom in Cameroon, MedTrucks in Morocco), energy (Simusolar in Tanzania), etc.

These innovations help us grow internally as people, while simultaneously highlighting the potential that lies within the African continent for investors. As Emmanuel Delaveau (General Partner at Partech) puts it, “This digitalization of traditional areas of the economy is very intriguing. We consider Africa to be one of the most exciting opportunities for the years to come”. We’ve also seen the emergence of multiple tech hubs that are helping businesses in Africa grow and creating avenues where young people can build technology products as the ecosystem evolves. Martin Heidegger summarizes the essence of technology as a way that we encounter entities generally, including nature, ourselves and everything around us. It goes beyond something that we make; it’s a state of mind that transforms over time.

This is what fascinates me about working in a technology company, especially as a Human Resources professional. Working within the tech space exposes you to multiple opportunities to make day-to-day processes easier and more efficient for both our clients and our employees. Before we walk away thinking not everyone is tech-savvy, the beauty of technology is that it comes in various forms. Simply leveraging Google Forms, Excel, Google Sheets, etc., does wonders for our turn-around time and service delivery!

The world is constantly changing, which in turn also affects work and office culture. There’s never been a better time for us in HR to be prepared for the future of work than now. We need to be more creative in the way we proffer solutions to the problems that come up over time. One way to better understand the concept of creative problem solving is through the principle of design thinking.


What is Design Thinking?

The Design Thinking concept has been around for a while and has gained popularity in the past decade. It initially became prominent in the mid-20th century as designers began using it to think about how to enhance the end user’s experience. By the 1960s, more people who were designing homes, consumer goods and technology were using this concept, making users the center of their design process flow.

In essence, Design Thinking is a creative, solution-based approach to problem-solving which has the user/people at the center of its process flow. One of the most recent trends is how Design Thinking can be used as a tool to understand and improve the customer experience of HR. It can be used in all aspects of HR, from sourcing, attracting and identifying potential talent to retaining and developing key people into greater assets for any organization. As HR leader and coach ‘Lara Yeku puts it: “HR needs a “human-centered” approach to respond and transform our talent challenges into “possible employee experience”.”

This might seem like an uphill task but the beauty of this process is that it is based on your current state and helps you plan for future work. Most importantly, being innovative is a skill that can be nurtured. So we get better with practice and experience.

To make this easier, we’ll walk through the key phases of Design Thinking and run through a basic example for us to get a better understanding of how to apply this process to solve problems in creative and innovative ways.

1. Understand:

  • Empathize: This is the first stage of design thinking which starts with observing, engaging and understanding the needs of the people involved in the problem-solving process. It helps you build a connection with the people (“end-user”) on a psychological and emotional level. Here, the basic thing involved is listening to understand (without any bias or assumptions) and not to respond. Understand the different personas involved.
  • Define: Within this stage, we try to find a way to identify the problem statement. Here, we work to make sense of the information that we gathered from the previous stage. Key things that we’ll want to note include: Are there any patterns that you can observe? What are the blockers or difficulties that our end users are facing? What is the main problem that we need to solve?

2. Explore:

  • Ideate: After we’ve listened to our end-users and identified what the problem statements might be, we then brainstorm to identify possible creative solutions to the problem. At the end of this stage, we should be able to identify possible ideas to explore. There are no limits to the number of creative ideas/ solutions that we may come up with.
  • Prototype: In this phase, the solution can be enhanced, revamped, established or rejected. Our goal is to validate the solutions and ideas which have been identified.

3. Materialize:

  • Test: Share these prototypes or pilot the possible scenarios with the end-users to try out. The goal here is to get feedback in order to help reframe the identified solutions to make them better. This could also mean going back to the prototyping or all the way back to the empathize stage.
  • Implement: This phase is the full-blown roll-out stage where all the end users get to use the product or work within the new process that has been drawn up. Don’t stop here, for this is a continuous process flow that could give rise to more problems which will need us to find creative solutions to help resolve.

This might seem very academic so let us walk through an example to help us understand how we can use Design Thinking:

Aminu is an HR Operations Officer within a technology company. Employees have reached out to indicate that they do not fully understand the company’s leave policy and what it entails. Thus, they are unable to effectively schedule their annual leave days. Aminu has just learned about how Design Thinking works and seeks to explore how to resolve the concerns that have been shared, so he does the following:

1. Empathize — Aminu works with the members of his team to conduct a focus group session with key members of each department. Within this forum, Aminu and the HR team are able to listen and itemize the identified problems.

2. Define — Aminu and his colleague come together to define the key problem statements.

  • Employees need a place where all the policies are stored and easily accessible to all without the need to contact HR.
  • Employees would like the policy to be succinct and updated to meet the current realities of the business.

3. Ideate — The team brainstorms to identify the following solutions:

  • Have a list of all the company’s policies on Google drive and link them to a sheet or tracker that all employees can access;
  • Create a simple intranet which will house all the policies;
  • Send out monthly reminders to all employees on where to access and view all the policies;
  • Review the policy to make the information clear and direct. Highlight the key information that is necessary for employees to understand how the leave days are allocated and how they can utilize them. Move all the additional/nice-to-have information into an appendix section;
  • Include a table of content, appropriate headers, and section dividers to help make readability easier for the end-user;
  • Carry out a road-show that uses jingles and a short video that highlights the key aspects of the leave policy.

4. Prototype — Aminu then works with his team members as well as the members of Technology and Internal Communications teams to explore potential technology applications that can be used to make this process more efficient and the language of the policy more “reader-friendly” for the employees.

5. Test — Once a new draft policy has been created and possible technology tools have been identified by Aminu and his team, it is shared with a controlled group of employees to review and provide feedback.

6. Implement — After the feedback has been obtained and necessary changes have been made to the policy and the mode of accessibility, this is rolled out to all employees. This could bring about more issues as well as great wins for the team. So we’ll have to start again from the empathize phase to see how to tackle the next amazing challenge ahead!

Now that we’ve helped Aminu and his team save the day as HR heroes do, it’s time for us to reflect and explore ways to enhance the customer experience for our employees, clients, and stakeholders.

A recent article published by Forbes indicates that the future of work will evolve in 5 ways: positions will be fluid, the workforce will be decentralized, motivation to work will be about more than paychecks, lifelong learning will be desired, and technology will augment humans’ jobs.

In view of this article, it’s clear that Design Thinking will be a very powerful toolkit that takes a more customer/people-centric approach to effective service delivery, and focuses on creating exceptional customer and employee experience.


Building Your Own Version of React From Scratch (Part 1)


Introduction

React is a huge codebase, and knowing where to start looking can be confusing. You may find files like this one – which is 3,000 lines long – and decide to look the other way and keep building awesome UIs with the library. That was me for the most part, up until recently, when I decided to take a deep dive into the codebase and other virtual DOM implementations like PreactJS.

We won’t cover all the features that React brings to the table (I still don’t understand everything about the codebase either), but we will look at some of the most important details that can help you understand React better and see what makes React 16+ faster than React 15. As a bonus, you will learn one way NPM packages are created and bundled with Rollup.

Setup & Basic Overview

Create a folder with a name for your React implementation. If you want to publish it on NPM, then make sure there’s no JS library with the name you have in mind. I’ll name mine tevreact.

Once we are done building, we will have a JSX-powered React clone capable of rendering a demo app that reads from local state and props.

JSX

JSX is an XML-like syntax that is written inside JavaScript files and is transpiled to function calls, which are in turn rendered either to the DOM (thanks to React-DOM) or to React Native views.

For example, here is how the React code you write gets transpiled to function calls.
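A representative sketch (the element and prop names here are illustrative):

// What you write:
const app = (
  <div id="container">
    <h1 className="title">Hello Tev</h1>
  </div>
);

// What Babel transpiles it to:
const app = React.createElement(
  "div",
  { id: "container" },
  React.createElement("h1", { className: "title" }, "Hello Tev")
);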

createElement takes a node type, props, and finally the children. If either the children or the props are missing, we get null in those fields. Lastly, createElement returns an object { type, props } which is then used to render to the DOM. In React’s case, the object has other properties such as key, ref, etc.; we will only be interested in type and props for now.

Let’s build our own createElement function

We’ll do things a bit differently in our implementation: we want all our objects to have the same shape. React’s children can either be an array of objects or a string. For our case, we want the children to always be objects, which means the h1’s children prop will be:
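Given the createTextElement helper defined below, a text child gets wrapped like this (a sketch):

[{ type: "TEXT", props: { nodeValue: "Hello Tev", children: [] } }]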

Inside the src folder, create 2 files: element.js and tevreact.js (or whatever you called your React clone). element.js is where we will have our createElement function.

const TEXT_ELEMENT = "TEXT";

/**
 * @param {string} type - the node type
 * @param {?object} configObject - the props
 * @param  {?...any} args - the children array
 * @returns {object} - to be called by tevreact.render
 */
export function createElement(type, configObject, ...args) {
  const props = Object.assign({}, configObject);
  const hasChildren = args.length > 0;
  const nodeChildren = hasChildren ? [...args] : [];
  props.children = nodeChildren
    .filter(Boolean)
    .map(c => (c instanceof Object ? c : createTextElement(c)));

  return { type, props };
}

/**
 * @param {string} nodeValue - the text of the node
 * @returns {object} - a call to createElement
 */
function createTextElement(nodeValue) {
  return createElement(TEXT_ELEMENT, { nodeValue, children: [] });
}

Moving on to the render phase

You’ve probably seen the render function from React-Dom

const root = document.getElementById("root")
ReactDom.render(<Component />, root)

The render function is responsible for rendering to the real DOM. It takes an element and a parentNode that the element will be appended to after being created. We still haven’t added support for custom JSX tags, but we are getting there. Create 2 new files, reconciler.js and dom-utils.js; render will live in the reconciler.js file. First, make TEXT_ELEMENT exported from element.js: export const TEXT_ELEMENT = "TEXT";

The render method checks whether the element is a text element; if it is, a text node is created, and if not, an instance of the specified HTML element is created.

We will define updateDomProperties in a bit, but it essentially takes the props provided and applies them to the element; the same process is repeated for the element’s children, with the render function called recursively on each child.
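Putting that description together, here is a minimal sketch of this first version of render (consistent with the instantiate function we will write later):

import { updateDomProperties } from "./dom-utils";
import { TEXT_ELEMENT } from "./element";

/**
 * @param {object} element - the { type, props } object from createElement
 * @param {HTMLElement} parentDom - the node the element gets appended to
 */
export function render(element, parentDom) {
  const { type, props } = element;

  // create a text node for text elements, a regular element otherwise
  const isTextElement = type === TEXT_ELEMENT;
  const dom = isTextElement
    ? document.createTextNode("")
    : document.createElement(type);

  // apply attributes and event listeners to the freshly created node
  updateDomProperties(dom, props);

  // recursively render each child into this node
  const childElements = props.children || [];
  childElements.forEach(childElement => render(childElement, dom));

  parentDom.appendChild(dom);
}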

In dom-utils.js create the updateDomProperties function and make sure it is exported.

import { TEXT_ELEMENT } from "./element";

/**
 * @param {HTMLElement} dom - the html element where props get applied to
 * @param {object} props - consists of both attributes and event listeners.
 */
export function updateDomProperties(dom, props) {
  const isListener = name => name.startsWith("on");
  Object.keys(props)
    .filter(isListener)
    .forEach(name => {
      const eventType = name.toLowerCase().substring(2);
      dom.addEventListener(eventType, props[name]);
    });

  const isAttribute = name => !isListener(name) && name !== "children";
  Object.keys(props)
    .filter(isAttribute)
    .forEach(name => {
      dom[name] = props[name];
    });
}

If the name starts with on, like onClick or onSubmit, then it’s an event listener and we need to add it as such via dom.addEventListener. The rest of the props are just applied as attributes to the element.

Testing time

Let’s see what we have come up with up until this point.

Export functions to the tevreact.js file

import { render } from "./reconciler";
import { createElement } from "./element";

export { createElement, render };

export default {
  render,
  createElement
};

Install rollup & setup npm scripts

$ yarn init -y // if you have yarn installed
$ npm init -y // if you have npm
// install rollup
$ yarn add rollup --dev
$ npm install rollup --save-dev

Set up the NPM scripts in package.json; replace yarn with npm run if you don’t have yarn installed, and replace tevreact.js with the file name you chose.

"scripts": {
    "build:module": "rollup src/tevreact.js -f es --exports named -n tevreact -o dist/tevreact.es.js",
    "build:main": "rollup src/tevreact.js -f umd --exports named -n tevreact -o dist/tevreact.umd.js",
    "build:all": "yarn build:module && yarn build:main",
    "prepublishOnly": "yarn build:all" // this command is run automatically before publishing to npm
  },

Example repo

Run npm run build:all

Create a folder named examples in the root of the directory and a basic index.html file.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta http-equiv="X-UA-Compatible" content="ie=edge" />
    <title>tevreact</title>
  </head>
  <body>
    <div id="app"></div>
    <!-- we need the babel standalone transpiler here since this is just a basic html page -->
    <script src="https://unpkg.com/babel-standalone@7.0.0-beta.3/babel.min.js"></script>
    <!-- load the umd version because it sets global.tevreact -->
    <script src="../dist/tevreact.umd.js"></script>
    <script type="text/jsx">
      /** @jsx tevreact.createElement */
      /** In the comment above we are telling babel which function it should
      use the default is React.createElement and we want to use
      our own createElement function*/
      const appElement = (
          <div>
              <h1>Hello Tev, Have you watched John Wick</h1>
          </div>
      )
      tevreact.render(appElement, document.getElementById("app"));
    </script>
  </body>
</html>

Replace tevreact with whatever name you went with. We’ll have a full example bundled with Webpack at the end.

It works! You should see something similar rendered on the page.

Updating the rendered JSX

If you’ve got time, try running the snippet below in a demo React app. React won’t render 2 app instances to the DOM, but with our current implementation, our render method will append the app twice since it doesn’t know how to perform an update.

const rootElement = document.getElementById("root")
ReactDom.render(<Component />, rootElement) // renders for the first time
ReactDom.render(<Component />, rootElement) // React updates in place instead of rendering a second copy

A duplicate call to render in your example index.html will not update the div but will append a new one.

It’s a bug that we will fix below.

In order to perform an update, we need to have a copy of the tree that has been rendered to the screen and the new tree with the updates so that we can make a comparison. We can do this by creating an object that we will call an instance.

const instance = {
  dom: HTMLElement, // the rendered dom element
  element: {type: String, props: object}, 
  childInstances: Array<instance> // array of child instances
 }

If the previous instance is null (e.g. on the initial render), we will create a new node.

If the element.type of the previous instance is the same as the type of the new instance, all we will do is update the props of the element.

Lastly, for now, if the type of the previous instance is not the same as the type of the new instance, we will replace the previous instance with the new one.

The above process is called reconciliation. It aims at reusing the DOM nodes present as much as possible. Now that you have a grasp of the logic, keep in mind that we need to iterate the same process for the childInstances.

Enough talk, let’s code.

We need a function inside reconciler.js that creates a new instance. It returns the instance object after an element is passed as an argument. We also need a reconcile function that will perform the reconciliation process described above. This will result in the render method offloading its functionality.

import { updateDomProperties } from "./dom-utils";
import { TEXT_ELEMENT } from "./element";

let rootInstance = null; // will keep the reference to the instance rendered on the dom

export function render(element, parentDom) {
  const prevInstance = rootInstance;
  const nextInstance = reconcile(parentDom, prevInstance, element);
  rootInstance = nextInstance;
}

function reconcile(parentDom, instance, element) {
  if (instance == null) {
    // initial render
    const newInstance = instantiate(element);
    parentDom.appendChild(newInstance.dom);
    return newInstance;
  } else if (element == null) {
    /**
     * this section gets hit when
     * a childElement was previously present
     * but in the new element is not present
     * for instance a todo item that has been deleted
     * it was present at first but is now not present
     */
    parentDom.removeChild(instance.dom);
    return null;
  } else if (instance.element.type === element.type) {
    /**
     * if the types are the same
     * eg: if prevType was "input" and current type is still "input"
     * NB:// we still havent updated
     * the props of the node rendered in the dom
     */
    instance.childInstances = reconcileChildren(instance, element);
    instance.element = element;
    return instance;
  } else {
    /**
     * if the type of the previous Instance is not the
     * same as the type of the new element
     * we replace the old with the new.
     * eg: if we had an "input" and now have "button"
     * we get rid of the input and replace it with the button
     */
    const newInstance = instantiate(element);
    parentDom.replaceChild(newInstance.dom, instance.dom);
    return newInstance;
  }
}

function instantiate(element) {
  const { type, props } = element;

  const isTextElement = type === TEXT_ELEMENT;
  const dom = isTextElement
    ? document.createTextNode("")
    : document.createElement(type);

  updateDomProperties(dom, props);

  // Instantiate and append children
  const childElements = props.children || [];
  // we are recursively calling instanciate on each
  // child element
  const childInstances = childElements.map(instantiate);
  const childDoms = childInstances.map(childInstance => childInstance.dom);
  childDoms.forEach(childDom => dom.appendChild(childDom));

  const instance = { dom, element, childInstances };
  return instance;
}

function reconcileChildren(instance, element) {
  const dom = instance.dom;
  const childInstances = instance.childInstances;
  const nextChildElements = element.props.children || [];
  const newChildInstances = [];
  const count = Math.max(childInstances.length, nextChildElements.length); 
  
  for (let i = 0; i < count; i++) {
    const childInstance = childInstances[i];
    const childElement = nextChildElements[i];
    // the reconcile function has logic setup to handle the scenario when either 
    // the child instance or the childElement is null
    const newChildInstance = reconcile(dom, childInstance, childElement);
    newChildInstances.push(newChildInstance);
  }

  return newChildInstances.filter(instance => instance != null);
}

We also need to update the updateDomProperties function to remove the oldProps and apply the newProps

export function updateDomProperties(dom, prevProps, nextProps) {
  const isEvent = name => name.startsWith("on");
  const isAttribute = name => !isEvent(name) && name != "children";

  // Remove event listeners
  Object.keys(prevProps)
    .filter(isEvent)
    .forEach(name => {
      const eventType = name.toLowerCase().substring(2);
      dom.removeEventListener(eventType, prevProps[name]);
    });

  // Remove attributes
  Object.keys(prevProps)
    .filter(isAttribute)
    .forEach(name => {
      dom[name] = null;
    });

  // Set new attributes
  Object.keys(nextProps)
    .filter(isAttribute)
    .forEach(name => {
      dom[name] = nextProps[name];
    });
    
  // Set new eventListeners
  Object.keys(nextProps)
    .filter(isEvent)
    .forEach(name => {
      const eventType = name.toLowerCase().substring(2);
      dom.addEventListener(eventType, nextProps[name]);
    });
}

Let’s update the functions that call updateDomProperties to provide the previous props and next props.

function reconcile(parentDom, instance, element) {
   /** code... */
  else if (instance.element.type === element.type) {
    // perform props update here
    updateDomProperties(instance.dom, instance.element.props, element.props);
    instance.childInstances = reconcileChildren(instance, element);
    instance.element = element;
    return instance;
  } 
     /** code... */
}

function instantiate(element) {
  const { type, props } = element;

  /** code... */
  const dom = isTextElement
    ? document.createTextNode("")
    : document.createElement(type);
   // apply new props and provide an empty object as prevProps since this is instantiation
  updateDomProperties(dom, {}, props);
   /** code... */
  
}

Build your clone again with npm run build:all and reload your example app, with the 2 render calls still present.

We now have one app instance.

Classes and Custom JSX tags

We'll look at class components and custom JSX tags; we won't be covering lifecycle methods in this tutorial.

We can do an optimization from here on out: in the previous examples, reconciliation happened for the entire virtual DOM tree. With the introduction of classes, we will make reconciliation happen only for the component whose state has changed. Let's get to it. Create a component.js file in src.

export class Component {
  constructor(props) {
    this.props = props;
    this.state = this.state || {};
  }
  setState(partialState) {
    this.state = Object.assign({}, this.state, partialState);
  }
}

We need the component to maintain its own internal instance so that reconciliation can happen for this component alone.

// ...code

function createPublicInstance(element, internalInstance) {
  const { type, props } = element; 
  const publicInstance = new type(props); // the type is a class so we use the *new* keyword
  publicInstance.__internalInstance = internalInstance;
  return publicInstance;
}

A change needs to be made to the instantiate function: it needs to call createPublicInstance if the type is a class.

function instantiate(element) {
  const { type, props } = element;
  const isDomElement = typeof type === "string";

  if (isDomElement) {
    // Instantiate DOM element
    const isTextElement = type === TEXT_ELEMENT;
    const dom = isTextElement
      ? document.createTextNode("")
      : document.createElement(type);

    updateDomProperties(dom, {}, props);

    const childElements = props.children || [];
    const childInstances = childElements.map(instantiate);
    const childDoms = childInstances.map(childInstance => childInstance.dom);
    childDoms.forEach(childDom => dom.appendChild(childDom));

    const instance = { dom, element, childInstances };
    return instance;
  } else {
    // Instantiate component element
    const instance = {};
    const publicInstance = createPublicInstance(element, instance);
    const childElement = publicInstance.render(); // each class has a render method
    // if render is called it returns the child
    const childInstance = instantiate(childElement);
    const dom = childInstance.dom;

    Object.assign(instance, { dom, element, childInstance, publicInstance });
    return instance;
  }
}

Updating a class component in our case will happen when a call to setState is made.

import { reconcile } from "./reconciler"

export class Component {
  constructor(props) {
    this.props = props;
    this.state = this.state || {};
  }
  setState(partialState) {
    this.state = Object.assign({}, this.state, partialState);
    updateInstance(this.__internalInstance);
  }
}

function updateInstance(internalInstance) {
  const parentDom = internalInstance.dom.parentNode;
  const element = internalInstance.element;
  reconcile(parentDom, internalInstance, element);
}

We now need to make sure that our reconcile function handles reconciliation for class components.

export function reconcile(parentDom, instance, element) {
  if (instance == null) {
    // initial render
    const newInstance = instantiate(element);
    parentDom.appendChild(newInstance.dom);
    return newInstance;
  } else if (element == null) {
    /**
     * this section gets hit when
     * a childElement was previously present
     * but in the new element is not present
     * for instance a todo item that has been deleted
     * it was present at first but is now not present
     */
    parentDom.removeChild(instance.dom);
    return null;
  } else if (instance.element.type !== element.type) {
    /**
     * if the type of the previous Instance is not the
     * same as the type of the new element
     * we replace the old with the new.
     * eg: if we had an "input" and now have "button"
     * we get rid of the input and replace it with the button
     */
    const newInstance = instantiate(element);
    parentDom.replaceChild(newInstance.dom, instance.dom);
    return newInstance;
  } else if (typeof element.type === "string") {
    /**
     * if the types are the same & are HTMLElement types
     * eg: if prevType was "input" and current type is still "input"
     * NB: we still haven't updated
     * the props of the node rendered in the dom
     */
    instance.childInstances = reconcileChildren(instance, element);
    instance.element = element;
    return instance;
  } else {
    //Update instance
    instance.publicInstance.props = element.props;
    const childElement = instance.publicInstance.render();
    const oldChildInstance = instance.childInstance;
    const childInstance = reconcile(parentDom, oldChildInstance, childElement);
    instance.dom = childInstance.dom;
    instance.childInstance = childInstance;
    instance.element = element;
    return instance;
  }
}

The end result means that reconciliation on class components starts at the parentNode of the child component and not at the beginning of the v-dom tree.
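To make that concrete, here is a small sketch of a component (hypothetical, not one of the tutorial files) whose setState only kicks off reconciliation for its own subtree:

class Counter extends tevreact.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
  }
  render() {
    // setState calls updateInstance, which calls reconcile starting from
    // this component's parent DOM node instead of the root of the v-dom tree
    return (
      <button onClick={() => this.setState({ count: this.state.count + 1 })}>
        {`Clicked ${this.state.count} times`}
      </button>
    );
  }
}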

Import the Component class in tevreact just like the rest of the functions.

import { render } from "./reconciler";
import { createElement } from "./element";
import { Component } from "./component";
export { createElement, render, Component };

export default {
  render,
  createElement,
  Component
};

Testing time

Run npm run build:all and create another HTML file, class.html, in the examples directory.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta http-equiv="X-UA-Compatible" content="ie=edge" />
    <title>tevreact</title>
  </head>
  <body>
    <div id="app"></div>
    <!-- we need the babel standalone transpiler here since this is just a basic html page -->
    <script src="https://unpkg.com/babel-standalone@7.0.0-beta.3/babel.min.js"></script>
    <!-- load the umd version because it sets global.tevreact -->
    <script src="../dist/tevreact.umd.js"></script>
    <!-- allow the react js preset -->
    <script type="text/babel" data-presets="react">
      /** @jsx tevreact.createElement */
      /** In the comment above we are telling babel which function it should
      use; the default is React.createElement, and we want to use
      our own createElement function */
      class App extends tevreact.Component {
        constructor(props) {
          super(props);
          this.state = { movieName: "John Wick" };
        }
        render() {
          const { movieName } = this.state;
          const { userName } = this.props;
          return (
            <div>
              <h1>
                Hello {userName}, have you watched {movieName}?
              </h1>
            </div>
          );
        }
      }
      tevreact.render(<App userName={"Tev"} />, document.getElementById("app"));
    </script>
  </body>
</html>

A class component reading from props and state 🎉

If you get rid of all the comments, we have a pretty decent library of fewer than 300 lines. Hit npm publish and you'll have your package on the npm registry.

Even though this works, this is how React worked prior to Fiber. In part 2 of this tutorial, we will work on integrating the Fiber reconciler, which is what React 16 and above uses at the time of writing this post.

The post Building Your Own Version of React From Scratch (Part 1) appeared first on Andela.

Building Your Own React From Scratch (Part 2)


Integrating Fiber.

In the first part of this tutorial, we built a React clone, but it did not have React Fiber. This second part of the tutorial will focus on integrating Fiber without introducing breaking changes for components built using the old version of the library.

What we can learn from Facebook’s React architecture

Check this video out to try and get a better grasp of React Fiber and how we will be implementing the same in our React clone.

Up until this point, our implementation has been made up of recursive calls that perform DOM operations. See reconcile and reconcileChildren. Whenever we make a change in the v-dom, we make the corresponding change to the DOM.

With the new architecture, changes to the DOM are made at once, after all the updates have been made in a second tree called the work-in-progress tree. The first tree is called the current tree, and it's the tree that represents the nodes in the actual DOM. Once all updates are complete on the work-in-progress tree, it gets rendered to the DOM and becomes the current tree.
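To make the double-buffering idea concrete, here is a minimal sketch; the names currentRoot and wipRoot are illustrative and not part of our implementation:

// currentRoot mirrors what is on the DOM; wipRoot is built off to the side
let currentRoot = null;
let wipRoot = null;

function beginUpdate(elements) {
  // each work-in-progress fiber keeps a pointer (alternate) to the
  // fiber it may replace on the current tree
  wipRoot = { props: { children: elements }, alternate: currentRoot };
}

function commitRoot() {
  // once all updates have been applied to the work-in-progress tree,
  // it is flushed to the DOM in one go and becomes the current tree
  currentRoot = wipRoot;
  wipRoot = null;
}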

A CodePen illustration of the computation from Lin Clark's talk. Building our own demo could not fit in the scope of this tutorial, so we'll use a pre-made React demo that toggles Fiber on and off.

We'll re-write the reconciliation algorithm, and this time we'll use requestIdleCallback. Through this function, the browser lets us know, via a callback, how much time is left until it has to perform other tasks in its backlog. Once it's time for the browser to do other things, we simply pause traversing the tree and resume once the browser has no work to do.
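If you haven't used requestIdleCallback before, here is a minimal standalone sketch of how the deadline object works; the tasks array is just a stand-in for the work queue we'll build shortly:

// each task is one small unit of work
const tasks = [() => console.log("unit 1"), () => console.log("unit 2")];

function workLoop(deadline) {
  // keep working while the browser still has idle time to spare
  while (tasks.length > 0 && deadline.timeRemaining() > 1) {
    const task = tasks.shift();
    task();
  }
  if (tasks.length > 0) {
    // out of time: ask to be called back on the next idle period
    requestIdleCallback(workLoop);
  }
}

requestIdleCallback(workLoop);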

This re-implementation will rely mostly on looping rather than recursion, which is what lets us pause after any unit of work and keeps the React clone fast.

The structure of a Fiber

//NB: ALL DOM nodes have their corresponding fibers in our new implementation
// most of these properties will make sense once we begin using them
let fiber = {
  tag: HOST_COMPONENT, // we can have either a host or a class component
  type: "input",
  parent: parentFiber, // the parentNode's fiber
  child: childFiber, // the childNode's fiber, if it has any
  sibling: null, // the element that is on the same tree level as this input
  alternate: currentFiber, // the fiber that has been rendered on the dom. Will be null on initial render
  stateNode: document.createElement("div"),
  props: { children: [], id: "image", type: "text" },
  effectTag: PLACEMENT, // can be PLACEMENT | DELETION | UPDATE depending on the dom operation to be done
  effects: [] // this array will contain the fibers of its child components
};

It's just a JavaScript object.

Work Phases

We will have 3 work phases.

  1. The BeginWork phase will traverse the work in progress tree until it gets to the very last child as we apply the effectTag. (we’ll implement this so don’t worry about the details for now)
  2. CompleteWork phase traverses back up the tree until we get to the Fiber with no parent as we apply effects from child to parent until the parent at the top has all the effects of the tree. Effects are simply Fibers that have a tag that will inform the reconciler how to apply the Fiber to the DOM. We can have a tag for adding a node to the DOM, one for updating and another for removing the node.
  3. CommitWork will be responsible for making the DOM manipulations based on the effects array that was created in the CompleteWork phase

Now that we have a general understanding, let's begin working on the individual pieces that will be combined to form the 3 phases described above.

Enough talk, let's code. First, clear out the reconciler file with the exception of the imports.

import { updateDomProperties } from "./dom-utils";
import { TEXT_ELEMENT } from "./element";
const ENOUGH_TIME = 1; // we set ours to 1 millisecond.

let workQueue = []; // there is no work initially
let nextUnitOfWork = null; // the nextUnitOfWork is null on initial render.

// the schedule function here can stand
// for scheduleUpdate or the
// call to render;
// both those calls update the workQueue with a new task.
function schedule(task) {
  // add the task to the workqueue. It will be worked on later.
  workQueue.push(task);
  // request to know when the browser will be pre-occupied.
  // if the browser doesn't support requestIdleCallback,
  // react will polyfill the function, but for simplicity's sake
  // I'll assume you're running this on an ever-green browser.
  requestIdleCallback(performWork);
}

function performWork(deadline) {
  loopThroughWork(deadline);
  if (nextUnitOfWork || workQueue.length > 0) {
    // if there's more work to be done, get to know when the browser will be occupied
    // and check if we can perform some work with the timing provided.
    requestIdleCallback(performWork);
  }
}

function loopThroughWork(deadline) {
  while (nextUnitOfWork && deadline.timeRemaining() > ENOUGH_TIME) {
    /**
     * perform unitofwork on a fiber if there's enough time to spare
     * from the browser's end.
     */
    nextUnitOfWork = performUnitOfWork(nextUnitOfWork);
  }
}

The schedule function simply updates the workQueue, and a call to performWork is made. We don't really need the schedule function; we'll replace it in the end, since its job will be done by calls to setState and render. It just stands as a placeholder to show you what those two functions will do. performWork simply loops through each item in the workQueue, and this is how we begin work.

We'll have a nextUnitOfWork variable and a performUnitOfWork function. performUnitOfWork will work on the current Fiber and return the nextUnitOfWork, which will be the next Fiber to be worked on.

An update to the workQueue needs to happen when we call setState or render.

Let's begin with making the setState function update the queue.

const CLASS_COMPONENT = "class";
// ...code

export function scheduleUpdate(instance, partialState) {
  workQueue.push({
    from: CLASS_COMPONENT, // we know scheduleUpdate came from a class so we have CLASS_COMPONENT here.
    instance: instance, // *this* object
    partialState: partialState // this represents the state that needs to be changed
  });
  requestIdleCallback(performWork);
}

The call to render also needs to update the workQueue

const HOST_ROOT = "root";
const HOST_COMPONENT = "host";

// code..

export function render(elements, containerDom) {
  workQueue.push({
    from: HOST_ROOT, // the root/parent fiber
    dom: containerDom, // document.getElementById("app") just a dom node where this fiber will be appended to as a child
    newProps: { children: elements }
  });
  requestIdleCallback(performWork);
}

Since the nextUnitOfWork is null on initial render, we need a function that gives us our first unit-of-work from the WorkInProgress tree.

function performWork(deadline) {
  if (!nextUnitOfWork) {
    // on initial render,
    // or if all work is complete and the nextUnitOfWork is null,
    // grab the first item on the workInProgress queue.
    initialUnitOfWork();
  }
  loopThroughWork(deadline)
  if (nextUnitOfWork || workQueue.length > 0) {
    // if there's more work to be done, get to know when the browser will be occupied
    // and check if we can perform some work with the timing provided.
    requestIdleCallback(performWork);
  }
}

function initialUnitOfWork() {
  // grab the first item in the array;
  // it's a first come, first served scenario.
  const update = workQueue.shift();

  // if there are no updates pending,
  // abort since there is no work to do.
  if (!update) {
    return;
  }

  // this block applies if the update came from setState:
  // we need to attach the object passed to this.setState to the
  // partialState of the current fiber
  if (update.partialState) {
    update.instance.__fiber.partialState = update.partialState;
  }

  const root =
    update.from === HOST_ROOT
      ? update.dom._rootContainerFiber
      : getRootNode(update.instance.__fiber);

  nextUnitOfWork = {
    tag: HOST_ROOT,
    stateNode: update.dom || root.stateNode, // the properties from the update are checked first for existence
    props: update.newProps || root.props, // if the update properties are missing default back to the root properties
    alternate: root
  };
}

function getRootNode(fiber) {
  // climb up the fiber tree until we reach the fiber with no parent,
  // i.e. the fiber at the very top of the tree (the host root)
  let node = fiber;
  while (node.parent) {
    // as long as the current node has a parent keep climbing up
    // until node.parent is null.
    node = node.parent;
  }
  return node;
}

Let’s now define our performUnitOfWork function.

This function performs work on the Fiber that has been passed to it as a parameter. It then goes ahead to work on its children and finally works on its siblings and the cycle continues.

Small visual representation of how the work is done.
// ... code
function performUnitOfWork(wipFiber) {
  // lets work on the fiber
  beginWork(wipFiber);
  if (wipFiber.child) {
    // if a child exists, it's passed on as
    // the nextUnitOfWork
    return wipFiber.child;
  }

  // No child, we call completeWork until we find a sibling
  let uow = wipFiber;
  while (uow) {
    completeWork(uow); // completework on the currentFiber
    // return the siblings of the currentFiber to
    // be the nextUnitOfWork
    if (uow.sibling) {
      // Sibling needs to beginWork
      return uow.sibling;
    }
    // if no siblings are present,
    // lets climb up the tree as we call completeWork
    // when no parent is found / if we've reached the top,
    // this function returns null and thats how we know that we have completed
    // working on the work in progress tree.
    uow = uow.parent;
  }
}
The idea is: go down the tree traversing children and siblings, then come back up completing work on each level. (We go down, then come back up again.)
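To make the traversal order concrete, here is a trace over a small hypothetical tree (not from our examples):

// Given this tree:
//        A
//       / \
//      B   C
//     /
//    D
//
// performUnitOfWork visits:
// beginWork(A) -> beginWork(B) -> beginWork(D)
// -> completeWork(D), completeWork(B) (D has no child or sibling)
// -> beginWork(C) (B's sibling)
// -> completeWork(C), completeWork(A) -> null, and the tree is done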

Let’s define beginWork

// ...code
function beginWork(wipFiber) {
  if (wipFiber.tag == CLASS_COMPONENT) {
    updateClassFiber(wipFiber);
  } else {
    updateHostFiber(wipFiber);
  }
}

function updateHostFiber(wipFiber) {
  if (!wipFiber.stateNode) {
    // if this is the initialRender and stateNode is null
    // create a new node.
    wipFiber.stateNode = createDomElement(wipFiber);
  }
  const newChildElements = wipFiber.props.children;
  reconcileChildrenArray(wipFiber, newChildElements);
}

function updateClassFiber(wipFiber) {
  let instance = wipFiber.stateNode;
  if (instance == null) {
    // if this is the initialRender call the constructor
    instance = wipFiber.stateNode = createInstance(wipFiber);
  } else if (wipFiber.props == instance.props && !wipFiber.partialState) {
    // nothing has changed here
    // lets move to the children
    cloneChildFibers(wipFiber);
    return;
  }

  instance.props = wipFiber.props;
  instance.state = Object.assign({}, instance.state, wipFiber.partialState);
  wipFiber.partialState = null;

  const newChildElements = wipFiber.stateNode.render();
  reconcileChildrenArray(wipFiber, newChildElements);
}

function createInstance(fiber) {
  // similar to the previous implementation:
  // we instantiate a new object of the class provided in the
  // type prop and return the new instance
  const instance = new fiber.type(fiber.props);
  instance.__fiber = fiber;
  return instance;
}

function createDomElement(fiber) {
  // check the type of the fiber object.
  const isTextElement = fiber.type === TEXT_ELEMENT;
  const dom = isTextElement
    ? document.createTextNode("")
    : document.createElement(fiber.type);
  updateDomProperties(dom, {}, fiber.props);
  return dom;
}
We perform different operations for the various Fibers, either class or host components.

Since we have reconciled the host Fibers, we need to reconcile their children as well.

// .. code


const PLACEMENT = "PLACEMENT"; // this is for a child that needs to be added
const DELETION = "DELETION"; //for a child that needs to be deleted.
const UPDATE = "UPDATE"; // for a child that needs to be updated. refresh the props

function createArrayOfChildren(children) {
 // we can pass children as an array now in the call to render
 /**
 * render () {
   return [
     <div>First</div>,
     <div>Second</div>
   ]
 }
 */
  return !children ? [] : Array.isArray(children) ? children : [children];
}

function reconcileChildrenArray(wipFiber, newChildElements) {
  const elements = createArrayOfChildren(newChildElements);

  let index = 0;
  // let oldFiber point to the fiber that's been rendered in the
  // dom, if it's present; on initial render it will be null.
  let oldFiber = wipFiber.alternate ? wipFiber.alternate.child : null;
  let newFiber = null;
  while (index < elements.length || oldFiber != null) {
    const prevFiber = newFiber;
    // we either get an element or false back in this check.
    const element = index < elements.length && elements[index];

    // if the type of the old fiber is the same as the new fiber
    // we just need to update this fiber
    // its the same check as the one we had in the previous
    // reconciliation algorithm
    const sameType = oldFiber && element && element.type == oldFiber.type;

    if (sameType) {
      // on an update the only new thing that gets
      // changed is the props of the fiber
      // I could have spread this, but for easier
      // understanding of where everything goes
      // and of the underlying structure, I'll do what
      // seemingly looks like repeating myself.
      newFiber = {
        type: oldFiber.type,
        tag: oldFiber.tag,
        stateNode: oldFiber.stateNode,
        props: element.props,
        parent: wipFiber,
        alternate: oldFiber,
        partialState: oldFiber.partialState,
        effectTag: UPDATE
      };
    }

    if (element && !sameType) {
      // this is when an element wasn't present
      // before but is now present.
      newFiber = {
        type: element.type,
        tag:
          typeof element.type === "string" ? HOST_COMPONENT : CLASS_COMPONENT,
        props: element.props,
        parent: wipFiber,
        effectTag: PLACEMENT
      };
    }

    if (oldFiber && !sameType) {
      // this check handles the case where a component
      // was present, but is now not present,
      // like a deleted todo item.
      oldFiber.effectTag = DELETION;
      wipFiber.effects = wipFiber.effects || [];
      // we need to keep a reference of what gets deleted
      // here we add the fiber to be deleted onto the effects array.
      // we'll work with the effects later on in the commit stages.
      wipFiber.effects.push(oldFiber);
    }

    if (oldFiber) {
      // we are only interested in the siblings of the
      // children that are on the same tree level;
      // in other terms, we just need the siblings of the render array.
      oldFiber = oldFiber.sibling;
    }

    if (index == 0) {
      wipFiber.child = newFiber;
    } else if (prevFiber && element) {
      prevFiber.sibling = newFiber;
    }

    index++;
  }
}
We need to keep track of the children that need to be updated, deleted or appended as new components

We made a call to cloneChildFibers; in essence, the function gives the children of the parentFiber a new parent property. The parentFiber from the work-in-progress tree becomes their new parent, replacing the currentFiber of the node that has been rendered to the DOM. Let's define it below.

// .. code
function cloneChildFibers(parentFiber) {
  const oldFiber = parentFiber.alternate;
  // if there is no child for the alternate
  // there's no more work to do
  // so just kill the execution
  if (!oldFiber.child) {
    return;
  }

  let oldChild = oldFiber.child;
  // on initial render, the prevChild is null.
  let prevChild = null;
  /**
   * below we are essentially looping through all the siblings
   * so that we can give them their new parent, which is the workInProgress fiber.
   * the other properties are hard coded as well.
   * I could have spread them, but for understanding of the
   * structure given, we are not going to spread them here.
   */
  while (oldChild) {
    const newChild = {
      type: oldChild.type,
      tag: oldChild.tag,
      stateNode: oldChild.stateNode,
      props: oldChild.props,
      partialState: oldChild.partialState,
      alternate: oldChild,
      parent: parentFiber
    };
    if (prevChild) {
      prevChild.sibling = newChild;
    } else {
      parentFiber.child = newChild;
    }
    prevChild = newChild;
    oldChild = oldChild.sibling;
  }
}

Now that we have cloned all the children, there's nothing else left for us to do here. It's finally time to complete the work and flush the changes to the DOM.

// ...code

let pendingCommit = null; // this is what will be flushed to the dom
// ... code


function completeWork(fiber) {
  // this function takes the list of effects of the children and appends them to the effects of
  // the parent
  if (fiber.tag == CLASS_COMPONENT) {
    // update the stateNode.__fiber of the 
    // class component to the new wipFiber (it doesn't deserve this name anymore since we are done with the work we needed to do to it)
    fiber.stateNode.__fiber = fiber;
  }

  if (fiber.parent) {
    // append the fiber's child effects to the parent of the fiber
    // the effects of the childFiber
    // are appended to the fiber.effects
    const childEffects = fiber.effects || [];
    // if this fiber has no effectTag of its own,
    // contribute an empty list for it
    const thisEffect = fiber.effectTag != null ? [fiber] : [];
    const parentEffects = fiber.parent.effects || [];
    // the new parent effects consist of this fiber's child effects +
    // this fiber itself (if it has an effect) + the parent's own effects
    fiber.parent.effects = parentEffects.concat(childEffects, thisEffect);
  } else {
    // if the fiber does not have a parent then it means we
    // are at the root. and ready to flush the changes to the dom.
    pendingCommit = fiber;
  }
}

completeWork simply takes a Fiber and appends its own effects to the Fiber's parent. It also goes a step further and appends the effects of the Fiber's children to the parent of the current Fiber. Once there is no parent Fiber, it means that we have reached the very top of our tree, and we set pendingCommit to the Fiber with no parent.

Keep in mind these effects are what we will use to determine the kind of operation we need to apply to the Fiber on the DOM.

We now need a way to tell the performWork function to finally flush the changes to the DOM based off of the pendingCommit.

function performWork(deadline) {
  // ...code
  if (pendingCommit) {
    commitAllWork(pendingCommit);
  }
}

function commitAllWork(fiber) {
  // this fiber has all the effects of the entire tree
  fiber.effects.forEach(f => {
    commitWork(f);
  });
  // the wipFiber becomes the currentFiber
  fiber.stateNode._rootContainerFiber = fiber;
  nextUnitOfWork = null; // no work is left to be done
  pendingCommit = null; // we have just flushed the changes to the dom.
}

function commitWork(fiber) {
  if (fiber.tag == HOST_ROOT) {
    return;
  }
  let domParentFiber = fiber.parent;
  while (domParentFiber.tag == CLASS_COMPONENT) {
    domParentFiber = domParentFiber.parent;
  }
  const domParent = domParentFiber.stateNode;
  if (fiber.effectTag == PLACEMENT && fiber.tag == HOST_COMPONENT) {
    // add the new element to the dom
    domParent.appendChild(fiber.stateNode);
  } else if (fiber.effectTag == UPDATE) {
    // update the dom properties of the element.
    updateDomProperties(fiber.stateNode, fiber.alternate.props, fiber.props);
  } else if (fiber.effectTag == DELETION) {
    // remove the node from the DOM; it's not needed
    commitDeletion(fiber, domParent);
  }
}

function commitDeletion(fiber, domParent) {
  // this function removes the fiber's DOM node(s).
  // for a class component, we walk down to its child to find the
  // underlying host node; after removing a node we move to its sibling,
  // and if no sibling is present we jump back up to the node's parent,
  // stopping once we get back to the original fiber.
  let node = fiber;
  while (true) {
    if (node.tag == CLASS_COMPONENT) {
      // check the child of the class component.
      // then loop back.
      node = node.child;
      continue;
    }
    domParent.removeChild(node.stateNode);
    while (node != fiber && !node.sibling) {
      // if there are no siblings, jump back up
      // to the node's parent.
      node = node.parent;
    }
    if (node == fiber) {
      return;
    }
    node = node.sibling;
  }
}

commitAllWork runs through all the effects, calling commitWork on each one. commitWork then either makes a call to commitDeletion in the case of a deletion, adds the stateNode to the DOM in the case of a placement, or simply updates the DOM properties in the case of an update. This is all determined by the effectTag on the fiber.

We are done now 🎉.

It’s testing time.

Build the module

$ yarn run build:all

Open up an example file you have in your examples folder and see that everything still works.

For a full-blown app

Follow along with this repository’s setup to see how you can make a demo app with Webpack and babel using your React clone.

Don’t use this module in production either. 🙂

Main Take-Aways

The core React team has gone to great lengths to optimize the React codebase to be as fast for your apps as possible.

We can give React an easy time by

1. Offloading expensive tasks to web workers.

This frees up the main thread to handle things like animations and makes your apps even more responsive.
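As a rough sketch of what offloading looks like, assuming a hypothetical worker file named expensive-task.js (Worker, postMessage and onmessage are the standard web worker APIs):

// main.js: hand an expensive computation to a worker thread
const worker = new Worker("expensive-task.js"); // hypothetical file name
worker.postMessage({ items: [1, 2, 3] });
worker.onmessage = event => {
  // back on the main thread, which stayed free for animations meanwhile
  console.log("result:", event.data);
};

// expensive-task.js: this code runs off the main thread
self.onmessage = event => {
  const result = event.data.items.map(n => n * n); // stand-in for real work
  self.postMessage(result);
};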

2. Have fewer divs and wrappers.

Each wrapper is a Fiber; the more there are, the more time it takes to reconcile your updates and flush the changes to the DOM. Use fragments to avoid having many unnecessary nodes and fibers that have to be traversed for little to no reason.

Fragments

We couldn't fit adding fragments into this tutorial, but when React identifies a fragment in the tree, it skips over it and goes straight to its children. Here is an example in the React codebase of how React deals with updating fragments.
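In day-to-day React (not our clone, which doesn't support fragments), the difference looks like this:

import React from "react";

// the wrapper <div> becomes an extra DOM node and an extra fiber
const WithWrapper = () => (
  <div>
    <span>Hello</span>
    <span>World</span>
  </div>
);

// the fragment adds no DOM node, so there is one less fiber to traverse
const WithFragment = () => (
  <React.Fragment>
    <span>Hello</span>
    <span>World</span>
  </React.Fragment>
);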

3. Use Hooks.

Hooks help reduce the heavy nesting of your components that is introduced by higher-order components and render props. If there is too much nesting, your apps become slower.
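For instance, a stateful value that once required a class or a higher-order wrapper can live in a single flat function component:

import React, { useState } from "react";

// state lives in a plain function component; no HOC or render-prop
// wrapper (and the extra fibers they introduce) is needed
function Counter() {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
}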

If you follow any or all of the above 3 steps, React and the main thread will not have to do as much work to give your users a seamless user experience.


References:

This video provides a basic understanding of modern React

The post Building Your Own React From Scratch (Part 2) appeared first on Andela.

Getting Started with Distributed Teams

How to Organise Your Own Tech Conference: Lessons From PyCon Africa 2019.


I was honored to facilitate a workshop at the first-ever pan-African python programmers’ community conference – PyCon Africa – that was hosted in Accra, Ghana. 

Accra is not only a beautiful city to visit, shop in, and take pretty pictures in; it also has the most interesting people that I have ever interacted with outside of Uganda. Tourists often say that a country has nice people, perhaps because they do not have any other noteworthy memories to share about their visit, but Ghanaians are actually nice people! So much so that Billa, who Ubered me to my hotel one evening after the day's events, invited me to share a meal cooked by his wife, who also happens to be his favorite cook, just because I quizzed him a bit about local Ghanaian cuisine as an icebreaker during the trip.

The venue was a conference hall in the University of Ghana which is oddly named The Bank of Ghana Auditorium. The conference was chaired by the tranquil Marlene Mhangami, who also sits on the board of the Python Software Foundation (PSF). Keynotes were delivered by African big shots within the python community, like Anna Makarudze, the Vice President of the Django Software Foundation (DSF); and also others who use python regularly to do world-changing and groundbreaking work, like Moustapha Cisse the Head of Google’s AI Center in Accra. The hall provided an auditorium for keynotes, talks, and main announcements; two breakout spaces for workshops, tutorials and more talks; and a large cafeteria area for selfies, sprints, networking, and meals.

The talks touched on themes like Community – with the majority of speakers sharing their experience of how they built communities back home; Data and AI – with some speakers sharing major technical breakthroughs in Artificial Intelligence for solving African problems; Programming Day-To-Day – with talks focusing on best practices, testing, etc.; and The Human Behind The Programmer – with talks about the health of the programmer, on subjects like burnout.

Now that you are all caught up on the event, here are some of the lessons – including highlights for you who want to host PyCon (or any other tech event) for your local programming community.

1. A successful event does not have to be a huge one with a massive guestlist:

PyCon Africa hosted fewer than 300 people, most of whom were students at the University of Ghana. The workshop I facilitated on Using Pandas To Make Sense of Data was attended by fewer than 20 participants. It is because of these manageable numbers that I was not only able to approach and interact with all the speakers who picked my brain, but was also able to individually assist each participant in the workshop to keep at pace.

2. Budget, Sponsors, and Funding:

Money makes the world go round, and it also makes organizing conferences easy. PyCon Africa managed to attract sponsorships from other well-wishing organizations and communities like PSF, DSF, and major Python and Django communities around the world. More importantly, PyCon managed to attract corporate sponsors like BriteCore and Andela because it managed to position itself as a source for talent which these companies could find valuable. PyCon Africa also sold tickets at different prices for different participants as a way to raise money. All speakers at the conference including the organizers were expected to purchase a ticket to participate in the conference.

3. Planning for Venue, Talks, Workshops and the Audience:

It is very important to anticipate the kind of audience when making plans for the venue, talks, and workshops. Depending on the level of expertise you expect within the audience, it is important to balance the kinds of talks between beginner, intermediate, and expert levels. Also, you have to try to choose a venue that is most befitting for the conference, based on the target audience. A university hall may be perfect for a student audience but may not be appropriate for an audience made up of businesspeople.

Some of the topics to consider when planning your talks could include, but are not limited to: Programming Talks, Data Talks, Community Talks, Social Impact Talks, and Product Pitches.

4. Scheduling and Time Keeping:

It is obviously important to have a schedule planned beforehand, but it is even more important to have at least one alternate plan, which will allow you to be flexible. Speakers cancel, breakfast can be delayed, electricity and other electronic issues can arise. Things change! You have to be in a position to adjust with as little pain as possible radiating off to your guests.

One of the things that PyCon Africa did not do well was to post the title of the talk happening in each room on the door. That made it slightly harder for participants to identify which room they wanted to be part of. It is possible to build a simple application that manages all the scheduling for you. This makes it easier for people to be informed when events change, or even notified when they subscribe to events during the conference.

5. Guarantee of respect for diverse people:

In order to be able to score any sponsorship from the PSF, you need to have a written Code of Conduct. This is a document that guarantees that all people regardless of race/ethnicity, gender, sexual orientation, ideology, etc shall be treated with kindness and respect, but shall also be expected to respect and treat others with kindness. I’d still recommend that you have a similar document even if you’re not planning to score any sponsorship from the PSF; because this kind of guarantee gives your audience the confidence to express themselves and interact freely during the conference.

Best of luck as you plan towards organizing your next (or first) tech conference in your community!

The post How to Organise Your Own Tech Conference: Lessons From PyCon Africa 2019. appeared first on Andela.

Dockerizing Rails: One Workshop To ‘Dockerize’ Abuja


We were in Abuja, Nigeria, over the weekend to organize Andela’s first technical workshop in the city. All the weeks of planning paid off, as the turnout was pretty impressive. There was an engineer who came all the way from Benin City for the workshop. If sojourning the length of half the country to arrive just shy of first place at a workshop doesn’t spell passion, I don’t know what does. (Respect, Mark Edosa, if you’re reading this.)

The workshop was themed Dockerizing Rails (now that title makes sense, all of a sudden). It was led by Igbanam, who made an excellent show out of it. Igbanam had to modify the title a little bit, to "Dockerizing Apps", seeing as a majority of the engineers who signed up for the workshop were JavaScript engineers. That was fun. We also had our technical recruiters around to talk to the engineers in the room about the possibility of working for Andela as remote software engineers in Abuja. This is a program we're exploring at the moment, and this workshop was a good place to kick things off.

Abuja seemed like a natural choice for us to hold our first workshop outside of Lagos in Nigeria. (Workshops are a somewhat regular feature in the cities we operate in across the continent.) No other city in Nigeria boasts a developer ecosystem as vibrant as the one in Lagos, for obvious reasons, but cities like Abuja, Uyo, Kaduna, etc. aren't too far behind. Andela is a distributed engineering company, and we wield remote work adeptly. We have remote teams in Accra and Cairo already, and our operations are going to continually scale up.

The Fireside Chat:

Fireside chat with Osoba (Marketing & Comms Mgr, Andela Nigeria) and Igbanam (Senior Software Engineer)

We also had a fireside chat right after the workshop. It served as an avenue to, among other things, unpack what it means to work as a software engineer at Andela. It kind of helped smoothen the path for our technical recruiters who came on afterward to talk about specifics on how to apply to join the company.

Wielding Remote Work:

Our technical recruiters taking questions from the audience.

One question that came up a couple of times while our recruiters took the stage was how the remote setup would work. The folks who asked were mainly curious about how the support system worked. Would there be anyone to unblock them if they were stuck? Is there an office to go to if one needed to meet with a colleague? Our recruiters clarified that software engineers at Andela are part of distributed teams collaborating in real-time across several time zones. They always have the support of their team members, and other support staff work to ensure that they are able to do their best work without hassles.

In case you missed the workshop, and are keen on joining Andela as a software engineer, check out our careers page. We’re hiring!

Read: How To Organise Your Own Tech Conference: Lessons From PyCon Africa 2019.

The post Dockerizing Rails: One Workshop To ‘Dockerize’ Abuja appeared first on Andela.

The Future of Andela


We started Andela five years ago to solve a simple but pervasive global challenge: Brilliance is evenly distributed, but opportunity is not. While our mission will never change, our strategy to achieve it has evolved as we’ve grown and learned more about both our market as well as the structural challenges that prevent brilliance and opportunity from connecting. 

Our initial strategy was to identify high-potential talent on the African continent, train them in software development (with a heavy emphasis on remote work and soft skills) and then place them as full-time distributed engineers. We saw an opportunity to build a business while investing in talent creation across Africa, and that’s exactly what we did.

Today, Andela is the most elite engineering organization in Africa, representing over 1500 engineers and working with more than 200 of the world’s most respected technology companies. We’re also on track to nearly double our revenue year over year. As the talent world has evolved, we have as well, and over the past few years it’s become increasingly clear that the world needs what Andela provides: high quality engineering-as-a-service. It’s also become clear, however, that the majority of the demand is for more experienced talent. 

As a result of that, we began sourcing and assessing mid-level and senior engineers, and they now represent more than 25% of our talent base.

While placing teams led by senior engineers has helped drive additional junior placement, it hasn’t been enough. We now have significantly more junior talent than we are able to place. Just as important, those junior engineers want, and deserve, authentic work experience that we are not able to provide. As a result, we’ve come to the conclusion that Andela’s next phase of growth requires a strategic shift in how we think about talent.

Historically, we have viewed our talent supply as being primarily junior with some mid-level and senior engineers. Moving forward, we’ll be shifting our approach to be focused on senior talent, with junior talent layered in on top of it. While nuanced, this shift in focus will allow us to better align with what the market needs, and in the process better connect brilliance with opportunity at all levels.    

As part of this shift, we have also had to make an extremely difficult decision as it relates to a number of talented junior engineers. Today, we are announcing that we are closing the D0 program in Nigeria, Kenya, and Uganda. Moving forward, we will be focusing D0 training efforts on our pan-African hub in Rwanda. In addition, we will be letting go of approximately 250 Andelans in Nigeria and Uganda, with an additional 170 potentially impacted in Kenya, who we don’t believe we’ll be able to find meaningful work for over the next year. 

The well-being of our employees, both past and present, is our immediate priority. We are providing holistic support programs for those who are affected by this shift, including ongoing access to learning programs and job placement services. We have committed a range of financial and emotional resources to former employees, and those who are leaving will continue to have access to the strongest engineering network on the African continent. Once an Andelan, always an Andelan. 

In addition, we’ve partnered with innovation hubs in each country (CcHUB in Nigeria, iHub in Kenya, and Innovation Village in Uganda) to help connect the impacted developers with opportunities in their local ecosystem. Together, we have identified over 60 companies who are looking to hire top quality junior engineering talent. In addition, these hubs will offer impacted engineers the opportunity to use their co-working spaces free of charge for the next three months. 

Going forward, we will hire another 700 experienced engineers by the end of 2020 in order to keep up with demand from our partners. To continue creating junior engineering talent at scale, we will invest in the Andela Learning Community, through which we’ve already trained more than 30,000 learners in software engineering fundamentals. Over the next three years, we plan to cultivate more than 100,000 engineers across the continent who will, in time, contribute to the growth of their local tech ecosystems as well as the broader technology community.  

All too often, opportunity is limited by race, gender, and nationality. We’re working to chip away at this by placing African engineers on global tech teams and, in the process, changing the world’s perception of talent. 

No story of growth is perfectly smooth, and these last few weeks have been amongst the hardest. Yet despite this, I’m confident that we’ll emerge stronger and more connected both to the market we serve and to the mission we are working to advance. And I look forward to trying to hire back many of these extraordinary engineers down the road – if you don’t get there first. 

The post The Future of Andela appeared first on Andela.


Tapping into your Dev Beast Mode


Before jumping into the gist of this article it is important to understand what it hopes to achieve by the time you finish reading. Unlike the magician above, I am willing to share with you a simple secret trick I stumbled upon recently while trying to deliver a certain work task.

Do you feel overwhelmed by your current work/study load? Do you enjoy your free time, grabbing some drinks or watching a game? Is the work-life balance suffocating you? Would you like to impress your new boss?

If you answered yes to any of the above questions, then surely this article is for you. After reading, you should be able to harness the times when you are most productive, capitalize on them to produce your best works of art while managing to create enough time for you to have sufficient personal time and enjoyment.

Almost everything will work again if you unplug it, including you. — Anne Lamott

Why Beast Mode?

As humans, we are inherently flawed in various aspects of our own lives. These flaws are characterized by 3 phases, i.e. Minor Flaw, Major Flaw, and Fatal Flaw. Now, it is important to be able to recognize what phase a certain flaw is currently in, as that way you are best situated to assess how much needs to be done to transition it from, for example, Fatal to Non-existent.

For the purposes of this article, we shall have a brief overview of what each phase could mean.
Minor Flaw — likely to slow down progression, but very possible to avert if noticed.
Major Flaw — likely to deter or stop progression for a certain period of time and, if not dealt with, has permanent consequences.
Fatal Flaw — prevents the start of progression and possibly affects surroundings and people.

Beast Mode is a close-to-ideal atmosphere where, as a person, you function in a vacuum that is isolated and free of flaws.
While in this state, the human brain is believed to be at its most creative, acute, resourceful, and productive, hence the nomenclature. This is also why we are now going to address the pertaining question of how to get you there.

[GIF: a confused dog, via GIPHY]

At this point, you're probably as confused as this dog is after reading the section above, right? But worry not, as it is at this point that we shall draw the nexus between ideology and fact.

While building up to the writing of this article, which was meant to be published a week earlier, I had one Fatal Flaw, a lack of focus, and this prevented me from starting my writing. Throughout the week, I had failed to find the vacuum that is Beast Mode to allow me to deliver a worthwhile article.

The path that gets you to Beast Mode has only 3 gateways that need to be unlocked before you are allowed into the promised state that is Production Haven. In no particular order, these are the key aspects that need to be addressed as a means of finding your way into the desired state of Beast Mode: Purpose, Environment/Atmosphere, and Timing.

“Your purpose in life is to find your purpose and give your whole heart and soul to it”- Buddha

When driving a vehicle to a certain destination, fuel is a prerequisite and a driving force. Were it an electric vehicle, electricity would still act as that source of energy. Well, the same goes for Beast Mode. It runs on huge chunks of energy that are simply fuelled by purpose.

The simple reason purpose is important is that, with purpose at the forefront of whatever you do, you're able to set the goals, timelines, and steps that are required for you to complete a certain desired task. Without purpose to set your goals, you would easily run out of energy to go on, as you would have no clear sight of what there is to be achieved; and without this, you simply have no reason to go on.

Now purpose can be anything that pushes you and cheers you on, it could be that promotion, bonus, more hours of sleep, or even that date you want to go on. A goal, however, would need to be something measurable. For example, as a developer, your goal could be to have a PR/MR raised and merged by EOD.

Neither comprehension nor learning can take place in an atmosphere of anxiety. — Rose Kennedy.

These two aspects work hand in hand, and usually one leads to the other: either an optimum atmosphere is influenced by timing, or vice versa. However, it is equally important to take both into consideration. On that note, it would be very helpful if we had more context on what is meant here.

Environment/Atmosphere — is the situation, surrounding or current physical location within which you exist at the moment of initiating Beast Mode.

Timing — this is as the title suggests, the period in which you choose to execute the Beast Mode or it can also be the period in which you intend to be in Beast Mode.

As a Developer or even a business professional, it is important to identify the conditions which align to allow you to focus and be able to dedicate a minimum of 30 minutes to one activity without being interrupted. These conditions differ amongst different individuals, some of us work best with music while others don’t, some people are unbothered by background action while others aren’t. It is at this point that you should then consider what defines your ideal atmosphere. A few things to note are that it should be free of distractions and serene enough to let you focus your chi on getting work done.

A positive atmosphere nurtures a positive attitude, which is required to take positive action. — Richard M. DeVos

The aspect of timing is important in this situation because certain times dictate specific atmospheres. Most developers are lone creatures, which is why a couple of you might enjoy late-night hours to get work done without anyone around; others are quite good at multi-tasking and don't mind a conversation with a colleague while working. However, it is paramount to know which category you fall under, because if stationed within the wrong timing and atmosphere, you are surely not going to get much done, seeing as you would be in an illusory form of Beast Mode.

Beast Mode is sacred and should, therefore, be treated that way, what this means is that you would need to sacrifice anything that can set you off the path drawn out by your purpose. All distractions serve to waste your time and energy and as earlier stated, Beast Mode runs on energy that can be depleted if not used purposefully.

After putting into consideration all the factors above, you would then be in a position to switch life forms and transition into what we have now come to know as Beast Mode. With this, you have unlocked the possibility of limitless creativity, focus and purpose-driven development.

A task that would have taken you an entire week would be cut down to a matter of hours or days, allowing you to take some time off and bask in the sunshine or take that dear one out on the promised date. At the end of the day, everyone is happy, and it's a win-win situation. You have completed your week's required task, your boss has received their requested feature, and your partner or friends can look forward to that promised night out.

All it required was some purpose, sacrifice for an ideal atmosphere, and perfect timing. What better way to crown this up than with some words from one of the Greatest Beasts of Soccer.

Success is no accident. It is hard work, perseverance, learning, studying, sacrifice and most of all, love of what you are doing or learning to do. — Pele

READ: How to manage communication as a distributed product manager.

The post Tapping into your Dev Beast Mode appeared first on Andela.

An Introduction to Python Generators and Coroutines


Before we get into the topic, let’s get some definitions right first:

  1. Iterator – this is an object that can be iterated upon. An iterator object returns data, one element at a time. Some of python's containers that are iterable include lists, tuples, and strings. These have the __iter__ method defined, which, when called, returns an iterator object.
  2. Generator – a simple way of creating iterators defined above.
  3. Coroutine – same concept as generators but we’ll see how they differ later in the article.

To better illustrate, consider a list:

my_list = [1, 2, 3, 4, 5]
iter_obj = iter(my_list) # calls __iter__, and an iterator object is returned
next(iter_obj) # prints 1
next(iter_obj) # prints 2

And so on, till we have gotten all items from the list after which the StopIteration exception is raised. By the way, we can also call iter_obj.__next__()

In order to build your own iterators, you have to implement a class with the __iter__ and __next__ methods. At times this can be very cumbersome, and that is where generators come in. Generators allow us to easily create iterators; none of the implementation mentioned above is required, as it is handled automatically. It's fairly simple to create a generator in python: as easy as defining a normal function with the yield statement rather than the return statement. If you're wondering what the difference between these two is: return terminates a function, while yield pauses the function, saving its state, and continues from where it left off on successive calls.

As an example, we can define a simple function that is a generator for Fibonacci numbers. (Some people might argue that the sequence starts from 0, but let's focus on the python here.)

def fibonacci(limit):
    n2 = 1
    if limit >= 1:
        yield n2

    n1 = 0

    for _ in range(1, limit):
        n = n1 + n2
        yield n
        n1, n2 = n2, n

To get the data, we can do:

fib = fibonacci(5) # we limit the number of elements we get to 5
next(fib) # prints 1
next(fib) # prints 1
next(fib) # prints 2
next(fib) # prints 3
next(fib) # prints 5
next(fib) # StopIteration exception is raised
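
Since a generator is itself an iterator, we rarely call next() by hand; a for loop consumes it and handles the StopIteration for us:

for number in fibonacci(5):
    print(number)  # prints 1, 1, 2, 3, 5, one per line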

Simple, isn’t it? On to coroutines now. Please note that in this article I talk about simple coroutines rather than native coroutines (the kind defined with async def).

We’ve seen that generators allow us to pull data out of a function context, pausing execution between pulls. Coroutines allow us to push data in. In this case, the yield statement essentially means “wait until you receive some input data”. We also use the yield statement a bit differently, as we shall see shortly.

We’ll go through a very simple example which provides a good basis: Imagine we have a list of names and want to find out if some names we have in mind (we don’t know how many since they’ll be input at runtime) are in this list.

def check_name_exists():
    names = ["Dennis", "Nick", "Fury", "Tony", "Stark"]
    print("Ready to check for names.")
    while True:
        name = yield  # pause here until a value is sent in
        if name in names:
            print("Found")
        else:
            print("Not found")

We can now do:

coro = check_name_exists() 

However, a coroutine can’t start receiving data right away; if we try to send a value immediately, Python raises a TypeError. We first need to prime it, which is easily done with:

coro.send(None) # prints out Ready to check for names. (priming with next(coro) works too)

Now, we can start sending some data to our coroutine, and this is done by making use of the send method.

coro.send("Dennis") # prints out Found
coro.send("Captain") # prints out Not found
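
Priming by hand is easy to forget, so a common convenience is a small decorator that primes the coroutine the moment it is created. A minimal sketch (the name primed is our own):

from functools import wraps

def primed(func):
    """Wrap a generator function so the coroutine it returns is already primed."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        coro = func(*args, **kwargs)
        coro.send(None)  # advance execution to the first yield
        return coro
    return wrapper

Decorating check_name_exists with @primed would let callers start sending names immediately.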

An important thing to note: if our generator or coroutine has no natural stopping point (ours loops forever), we can shut it down manually by doing:

coro.close()
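
It’s also worth knowing that the yield in a coroutine can hand a value back to the caller, so send() both pushes data in and pulls a result out. A small sketch of our own (running_average is illustrative, not from the article):

def running_average():
    total = 0.0
    count = 0
    average = None
    while True:
        value = yield average  # receive a number, hand back the current average
        total += value
        count += 1
        average = total / count

avg = running_average()
avg.send(None)       # prime it
print(avg.send(10))  # 10.0
print(avg.send(20))  # 15.0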

That’s it for now; I hope this was helpful in beginning to understand generators and coroutines.

READ: How to build your own version of React from scratch (Part 1)

The post An Introduction to Python Generators and Coroutines appeared first on Andela.

Silicon Slopes’ Podcast: Meat and Potatoes, feat. Andela’s Melanie Colton

Andela & GitHub Partner to Host CodeNaija 2019; Nigeria’s Biggest Hackathon Event of the Year

We’re excited to announce that we will be hosting the CodeNaija 2019 hackathon in partnership with GitHub’s Black Employee Resource Group, the Blacktocats. The intense two-day hackathon, themed “building technology for social good”, is scheduled for the 26th and 27th of October, 2019.

The participating teams of the CodeNaija hackathon will be made up of some of Nigeria’s top software engineers, who will create prototype solutions that could help Africa solve some of its biggest challenges in finance, healthcare, education, and agriculture. Teams will have access to Flutterwave and other third-party APIs while also leveraging GitHub’s community.

The goal of the hackathon is to spotlight the existing community of Nigerian software engineers by showcasing their engineering craft as they build technology solutions for social good. Partners and sponsors all share a common interest in enabling software engineers to build solutions to some of Africa’s biggest challenges, which is why we’ve strategically aligned to host a hackathon of this magnitude in Nigeria.

Technical and business mentors provided by the partners will serve as supportive pillars for teams throughout the hackathon. Participating teams will pitch their solutions to a panel of carefully selected judges from GitHub, Microsoft, and Flutterwave, who will be responsible for selecting the winning teams.

There will be multiple prizes for the top five teams; as the grand prize, the winning team will be given the opportunity to pitch to Microsoft’s venture capital firm, M12.

We have put out a call for applications across multiple media channels, inviting mid-level and senior engineers interested in the hackathon to apply.

If you’d like to be part of the hackathon, sign up on this link. The deadline to apply is 15th October 2019.

To learn more about the hackathon, follow us on social media – on Twitter and on LinkedIn.

The post Andela & GitHub Partner to Host CodeNaija 2019; Nigeria’s Biggest Hackathon Event of the Year appeared first on Andela.

Practices and behaviours of highly productive remote teams

I have worked remotely as a software engineer with several companies for over four years now. This article documents some of the things I have learned and practiced that I believe are instrumental in maintaining highly productive remote teams. It is worth noting that high productivity in any team is a function of the right people, the right practices, and the right tools. The practices below assume that you already have the right people on your team; incorporate them to increase productivity.

1. Invest in an intentional culture geared towards driving engagement and interaction between team members. Each company should decide on its own culture, but the general direction should maximize bonds and empathy within the team. Initiatives such as sharing monthly photo updates about team members’ lives can go a long way here, and the bonds they build set a very good basis for collaboration.

2. Keep all conversations about the product in group channels where everyone can follow. This is particularly important when some members of your team are on-site and a few others work remotely. Moving all product conversations to a common channel ensures the team maintains a shared understanding, direction, and context on discussions regarding the product; without it, you risk duplicated effort or, worse, work pulling in conflicting directions.

Don’t underestimate the value of a process that facilitates collaboration and drives the ever-necessary shared understanding of team direction and goals.

3. A good internet connection is a must-have for people working remotely. The internet is the platform that enables people to connect and work together irrespective of geographical location, which means connection quality directly affects the team’s ability to collaborate. A poor internet connection can be the difference between a productive meeting and an unproductive one.

4. Invest in a good video conferencing product to facilitate team meetings. One of the challenges of remote work is the lack of in-person communication; much as a tool can’t fully substitute for it, a good video conferencing product such as Zoom goes a long way in bridging the gap. I would recommend a tool that provides, at the very least, HD video, high-quality audio, screen sharing, and whiteboarding. Screen sharing is useful during presentations and even pair programming sessions between colleagues.

5. For any meeting, if one teammate is working remotely, it makes sense for everyone in the meeting to join the call from their own machines. This keeps all the conversation on a platform every member can access, and protects against a scenario where co-located team members engage in side-discussions that are hard for the remote person to pick up.

6. It is important to maintain some overlap hours during which the entire team is online and reachable. This defines a time boundary within which any team member can reach out to another for assistance or collaboration on a task. It is okay to reach out outside these hours, but defining them sets expectations and helps teammates plan their work around the overlap. The number of hours can be decided on a company basis; from my experience, five hours is a good start.

7. When the budget allows, plan for at least one company off-site a year. Off-sites are a good time to reinforce culture and strengthen bonds within teams. The casual, friendly atmosphere that off-sites create facilitates the strengthening of bonds that contribute directly to employee satisfaction, performance, and longevity. Off-sites have also been known to fan the creative flames within teams.

READ: How to manage communication as a distributed Product Manager

The post Practices and behaviours of highly productive remote teams appeared first on Andela.
