Oct 16, 2023 - Guidelines for addressing requests from colleagues

In the workplace we are constantly receiving requests from colleagues, and the more senior we get, the greater the demand on our time. That’s natural and expected as we progress in our careers and take on leadership roles.

The way we address these requests has a direct impact on how we are perceived within the company and in the collaborative work environment we want to foster. I see them as opportunities to exercise communication and leadership skills, share knowledge, give feedback and positively impact my team and the business.

Let’s take a look at how we can approach requests effectively, how seniority plays a role and analyze a few illustrative examples.

Guidelines

In a nutshell, our communication in these situations should be driven by the following guidelines:

  • Employ empathy and nonviolent communication
  • Promote a cooperative, trusting, and supportive environment
  • Empower colleagues’ autonomy
  • Advance colleagues’ technical skills
  • Focus on results

Before responding to a request, reflect on whether your communication meets most of the above criteria and doesn’t directly conflict with any of them. If it is aligned with these guidelines, proceed with confidence.

Notice I deliberately highlighted ‘autonomy’, since I believe it plays a central role. In this context, it means that we should encourage our colleagues to seek solutions to their requests as autonomously as possible. It’s like the old saying: ‘Give a man a fish, and you feed him for a day; teach a man to fish, and you feed him for a lifetime’ (even though sometimes we’ll have no option other than to give the fish, due to short deadlines, the counterpart’s lack of experience, etc.).

Seniority

Before heading to the examples, a quick note on how the seniority level of our counterpart affects the way we communicate.

Junior-level colleagues require more care and attention in communication, as they are in an early stage of professional development and need the consideration and support of more senior colleagues to help them grow. Positive and constructive feedback should be given frequently, and we should push them out of their comfort zone, but in a healthy and considerate way.

In senior-to-senior relationships I encourage some friction. We need to have the courage to be honest with each other without beating around the bush, and trust in the maturity of our colleagues to receive feedback that is less polished, but constructive and insightful. This can speed things up and foster a more dynamic environment.

Examples

To illustrate, below are examples of typical responses we often give. The first quoted response in each pair is inadequate according to the guidelines; the one that follows shows a more appropriate alternative. It is important to note that there isn’t only one way to act in each situation, and depending on your background and leadership style, your communication will vary. Additionally, keep in mind that we’re examining scenarios in which junior professionals seek assistance from their senior counterparts.

A)

“Hi Alice, I don't have time to help you, I'm too busy.”

This is a cold response. If you frequently reply like this, you’re not making yourself available to your colleagues, creating the perception that the door to collaboration is closed.

“Hi Alice, unfortunately I'm very short on time today due to a priority activity I need to finish by the day's end. Is this urgent, or can we come back to this topic tomorrow? You can also check if Bill is available. Please keep me informed either way.”

This response shows empathy: it not only provides insight into your own situation but also expresses genuine interest in understanding the urgency behind the request. It further demonstrates cooperation by proposing a time for assistance, possibly the next day. Finally, it points to alternatives, such as checking the availability of another colleague who may be familiar with the subject.

B)

“Bob, you should be able to do this by yourself. Try harder”

This can be characterized as a harsh response, especially towards people at a more junior level, as it passes an unfavorable judgment on the person’s ability and closes the door to collaboration.

“Bob, kindly share what you've attempted so far, along with any unsuccessful attempts. This will help me grasp the context and offer more tailored guidance.”

This response aims to comprehend prior efforts through clear and well-crafted communication, naturally fostering the culture of autonomy valued within the company. It conveys the expectation that individuals have made an effort to address the issue independently before seeking assistance.

C)

“Hi Alice, look, I've explained this to you several times, do some research.”

Again, an example of a harsh response. Even though a subject may have been already discussed in the past, it’s essential to maintain an environment of trust and companionship by employing a more thoughtful communication.

“Hi Alice, I recall encountering this issue on multiple occasions in the past, and I believe that by revisiting our previous solutions, we can resolve it effectively. Please consider retrieving the approach we took when dealing with a similar situation, such as 'XYZ'. If you have any questions, feel free to reach out. I'm here to assist.”

Here, we assume an honest oversight without passing judgment, recognizing that we can sometimes struggle to recall past situations and apply them to new ones. With time and guidance, we can enhance this skill and progress. This presents a valuable opportunity to support a colleague’s technical development and foster their autonomy.

D)

“Bob, don't worry, I'll take a look and solve this problem by myself, I think it will be faster. I'll let you know when I'm done.”

At first glance, this response appears cooperative and results-oriented. However, it contradicts the autonomy guideline by depriving the individual who sought help of the opportunity to learn and grow from the challenge. This approach can inadvertently foster dependency, as the person may not acquire the skills to handle similar situations independently in the future.

Unless there’s a good reason to ‘hijack’ the problem, for instance due to a short deadline, the following approach is recommended:

“Bob, this is a very interesting problem. Let's discuss it, and I'll assist you in finding a solution. I anticipate it might arise in future scenarios, making it crucial to solidify this learning.”

Now we see the clear mentoring approach, emphasizing the development of colleagues and a forward-looking perspective.

Closing Thoughts

The more senior a team member becomes, the higher the expectation for them to have a positive impact on the team and, subsequently, on the business, inspiring and pushing their colleagues beyond their comfort zones, and thus earning recognition as a leader among peers.

It’s far from easy; in fact, it’s quite challenging to assume the responsibilities of mentoring and take a prominent role within the team. Effective time management will be essential to strike a balance between safeguarding our personal time and remaining accessible to the team. This means prioritizing tasks, setting clear boundaries, and ensuring that our schedule allows for both focused work and availability to assist our colleagues. By efficiently managing our time, we can fulfill our leadership roles without becoming overwhelmed or unavailable when needed.

The leadership challenge, whether in a managerial or technical role, is substantial, but the personal growth it brings is truly worth the effort.

Apr 21, 2023 - Setting up a reverse proxy using nginx and docker

A reverse proxy is a web server that sits between client devices and backend servers, receiving requests from clients and directing them to the appropriate server. The backend servers are shielded from direct internet access and their IP addresses are hidden, providing an additional layer of security.

[Figure: a reverse proxy sitting between client devices and backend servers]

Overall, reverse proxies are useful for improving the security, performance, and scalability of web applications and services. They’re commonly used for load balancing traffic across any number of backend servers, and for SSL offloading.

In this brief post I provide a template for setting up an nginx reverse proxy using docker.

Docker-compose

This is a docker-compose file with two services, the nginx web server that will act as a reverse proxy, and a certbot agent for enabling SSL connections to it:

version: '3.3'

services:

  nginx:
    image: nginx:1.19-alpine
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./volumes/nginx:/etc/nginx/conf.d
      - ./volumes/certbot/conf:/etc/letsencrypt
      - ./volumes/certbot/www:/var/www/certbot
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"

  certbot:
    image: certbot/certbot
    restart: always
    volumes:
      - ./volumes/certbot/conf:/etc/letsencrypt
      - ./volumes/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

The nginx service uses the official Nginx Docker image with version 1.19-alpine. It maps ports 80 and 443 to the host machine, which allows incoming HTTP and HTTPS requests to be forwarded to the Nginx container. The volumes section maps three directories on the host machine to directories in the container:

./volumes/nginx is mapped to /etc/nginx/conf.d, which allows custom Nginx configuration files to be added to the container.

./volumes/certbot/conf is mapped to /etc/letsencrypt, which stores the SSL/TLS certificates generated by Certbot.

./volumes/certbot/www is mapped to /var/www/certbot, which is where Certbot writes temporary files during the certificate renewal process.
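Before the first run, the three host directories referenced by these bind mounts can be created up front, so they aren’t created on demand by the Docker daemon with root ownership (paths taken from the compose file above):

```shell
# Create the host-side directories used by the bind mounts above.
mkdir -p ./volumes/nginx ./volumes/certbot/conf ./volumes/certbot/www
```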

The certbot service uses the official Certbot Docker image and maps the same volumes as the nginx service. The entrypoint section specifies a shell command that is executed when the container starts up: a loop that runs certbot renew to check for and perform SSL/TLS certificate renewals, then sleeps for 12 hours before trying again.
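One caveat worth noting: on a brand-new host the certificate files referenced by the HTTPS configuration don’t exist yet, so the initial certificate must be issued once before the renewal loop has anything to renew. A hedged sketch of that one-off bootstrap using Certbot’s webroot plugin (the domain and e-mail below are placeholders; nginx must already be answering ACME challenges on port 80 from the shared webroot volume):

```shell
# One-off bootstrap: issue the initial certificate via the webroot plugin.
# Replace the domain and e-mail with your own values.
docker-compose run --rm --entrypoint certbot certbot certonly \
  --webroot -w /var/www/certbot \
  -d thomasvilhena.com \
  --email admin@example.com --agree-tos --no-eff-email
```

After this completes, restarting the nginx service picks up the freshly issued certificate, and the certbot container’s renewal loop keeps it current from then on.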

Now let’s see how each service is configured.

Nginx

Below you’ll find an nginx configuration file that sets it up as a load balancer and reverse proxy for the thomasvilhena.com domain:

### Nginx Load Balancer

upstream webapi {
	server 10.0.0.10;
	server 10.0.0.11;
	server 10.0.0.12 down;
}

server {
	listen 80;
	server_name localhost thomasvilhena.com;
	server_tokens off;
		
	location ^~ /.well-known/acme-challenge/ {
		default_type "text/plain";
		alias /var/www/certbot/.well-known/acme-challenge/;
	}
		
	location = /.well-known/acme-challenge/ {
		return 404;
	}
		
	location / {
		return 301 https://thomasvilhena.com$request_uri;
	}
}


server {
	listen 443 ssl http2;
	server_name localhost thomasvilhena.com;
	server_tokens off;

	ssl_certificate /etc/letsencrypt/live/thomasvilhena.com/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/thomasvilhena.com/privkey.pem;
	include /etc/letsencrypt/options-ssl-nginx.conf;
	ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

	location / {
		proxy_pass http://webapi;
		proxy_set_header Host $http_host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto $scheme;
		proxy_set_header X-NginX-Proxy true;
		proxy_redirect off;
	}
}

The “upstream” section defines a group of servers to be load balanced, with three sample servers listed (10.0.0.10, 10.0.0.11, and 10.0.0.12). One server is marked as “down” which means it won’t receive requests.

The first “server” block listens on port 80 and redirects all requests to the HTTPS version of the site. It also includes some configuration for serving temporary files over HTTP which are required for the SSL certificate renewal process through Let’s Encrypt.

The second “server” block listens on port 443 for HTTPS traffic and proxies requests to the defined “upstream” group of servers. The “location /” block specifies that all URLs will be proxied. The various “proxy_set_header” directives are used to set the headers needed for the upstream servers to function correctly.

Certbot

Certbot requires two configuration files:

/volumes/certbot/conf/options-ssl-nginx.conf contains recommended security settings for SSL/TLS configurations in Nginx. Here’s a sample content:

ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;

ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";

/volumes/certbot/conf/ssl-dhparams.pem contains Diffie-Hellman parameters used for SSL/TLS connections. It is generated by running the following command:

openssl dhparam -out /etc/letsencrypt/ssl-dhparams.pem 2048

Here’s a sample content:

-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEA3r1mOXp1FZPW+8kRJGBOBGGg/R87EBfBrrQ2BdyLj3r3OvXX1e+E
8ZdKahgB/z/dw0a+PmuIjqAZpXeEQK/OJdKP5x5G5I5bE11t0fbj2hLWTiJyKjYl
/2n2QvNslPjZ8TpKyEBl1gMDzN6jux1yVm8U9oMcT34T38uVfjKZoBCmV7g4OD4M
QlN2I7dxHqLShrYXfxlNfyMDZpwBpNzNwCTcetNtW+ZHtPMyoCkPLi15UBXeL1I8
v5x5m5DilKzJmOy8MPvKOkB2QIFdYlOFL6/d8fuVZKj+iFBNemO7Blp6WjKsl7Hg
T89Sg7Rln2j8uVfMNc3eM4d0SEzJ6uRGswIBAg==
-----END DH PARAMETERS-----

That’s it, now you just need to run docker-compose up and your reverse proxy should be up and running ✅

Apr 10, 2023 - The burden of complexity

Complexity is present in any project. Some have more, some have less, but it’s always there. The manner in which a team handles complexity can pave the way for a project’s success or lead towards its technical demise.

In the context of software, complexity arises from a variety of factors, such as complicated requirements, technical dependencies, large codebases, integration challenges, architectural decisions, team dynamics, among others.

When talking to non-technical folks, especially those not acquainted with the concepts of software complexity and technical debt, it can be helpful to present the topic from a more managerial perspective.

So I propose the following qualitative diagram that regards complexity as an inherent property of a software project, and simultaneously, a responsibility that a software development team must constantly watch and manage for being able to deliver value in the long run:

[Figure: qualitative diagram of the complexity burden and the team’s capacity over time, showing the improvement and degradation zones]

From the diagram:

  • The Complexity Burden curve represents the theoretical amount of effort necessarily spent servicing complexity, as opposed to productive work. This is an inevitable aspect of software development and can manifest in various forms, including spending time understanding and working with complex code, encountering more intricate bugs and errors, updating dependencies, struggling to onboard new team members due to excessively elaborate designs, among others.

  • The Team’s Capacity line is the maximum amount of effort the team is able to provide, which varies over time and can be influenced by factors such as changes in the product development process, team size, and efforts to eliminate toil [1]. Additionally, reductions in the complexity burden of a project can unlock productivity, influencing the team’s capacity as well.

  • The Complexity Threshold represents the point where the team’s capacity becomes equal to the complexity burden. In this theoretical situation, the team is only allocating capacity towards servicing complexity. Value delivery is compromised.

With these definitions in place, let’s review the two colored zones depicted in the diagram.

The Improvement Zone

Projects are typically in the improvement zone, which means that the team has enough capacity to handle the complexity burden and still perform productive work. The lower the complexity burden, the more efficient and productive the team will be in delivering results. The team can choose to innovate, develop new features, optimize performance, and improve UX. It’s worth noting that doing so may result in added complexity. This is acceptable as long as there is sufficient capacity to deal with the added complexity in the next cycles of development and the team remains continuously committed to addressing technical debt.

The Degradation Zone

A project enters the degradation zone when the team's capacity is insufficient for adequately servicing complexity, adding pressure to an already strained project. The team will constantly be putting out fires, new features will take longer to ship, bugs will be more likely to be introduced, developers may suggest rewriting the application, availability may be impaired, and customers may not be satisfied. The viable ways out of this situation are to significantly reduce complexity or to increase capacity. Other efforts will be mostly fruitless.

Closing Thoughts

The concept of complexity burden can be a valuable tool for enriching discussions around promoting long-term value delivery and preventing a project from becoming bogged down by complexity, leaving little room for new feature development. It’s important to make decisions with a clear understanding of the complexity burden and how it may be affected.

It’s worth pointing out that if the productive capacity of a team is narrow, meaning if the proportion of the team’s capacity allocated towards the complexity burden is already too high, the team will find itself in a situation where continuing to innovate may be too risky. The wise decision then will be to prioritize paying off technical debt and investing in tasks to alleviate the complexity burden.

Even though they are related, it’s crucial to distinguish between the complexity burden and technical debt. The former materializes as the amount of (mostly) non-productive work a team is encumbered by, while the latter is a liability that arises from design or implementation choices that prioritize short-term gains over long-term sustainability [2]. A project can become highly complex even with low technical debt.

Finally, a project is a dynamic endeavor, and a team may find itself momentarily in the “degradation” zone in one cycle and in the “improvement” zone in the next. What matters most is to be aware of the technical context and plan next steps preemptively, aiming to maintain the complexity burden at a healthy level.


Reference

[1] Google - Site Reliability Engineering. Chapter 5 - Eliminating Toil

[2] Wikipedia - Technical debt