Designing a feedback system for IFTTT

Keeyen
10 min read · Feb 12, 2019

During my time at Small Animal Studios, we had the privilege of working with the talented team at IFTTT on multiple projects. One of them was designing a system for IFTTT’s users to give feedback on the services they use on the platform.

Context

IFTTT is an automation service whose main product connects different services so they can work together. These connections are called Applets.

There are two types of users in the IFTTT ecosystem:

  • The first type, the majority of IFTTT’s users, are the people who use the iOS and Android apps and the web app. We call them the general users.
  • The second type is platform users. Platform users are people who work at the services on the IFTTT platform. For example, a Philips Hue employee logs into IFTTT’s platform site to manage and change anything related to their Applets.

My role in this project included research, brainstorming/ideation, product design, wireframing, and prototyping.

The Designs

Let’s take a look at a few shipped designs from this project. (Focus on how users rate an Applet and give feedback to a service.)

Negative Feedback Flow
Positive Feedback Flow

Suggesting an Applet

This appears on a service’s main page. When users scroll to the bottom of the page, where they may not have found what they want, a section prompts them to send suggestions to the service.

Why? 🤷‍♀️

Why did the project exist in the first place?

Before the team decided to work on feedback, we were given a goal: increase IFTTT’s monthly platform engagement. But the more we thought about it, the more we believed the engagement metric should not be the measure of success; engagement is an after-effect of a good product or feature being shipped, because that is what makes users come back to the site. So we took a step back, looked at the whole IFTTT platform from a broad view, and asked ourselves:

What makes IFTTT great?

The incredibly useful Applets IFTTT provides.

At that point, we were all on the same page that great-quality Applets are what matter to IFTTT. We then set out to figure out how we could improve the quality of Applets on the platform, beyond the features we already had (usage metrics, crash reports, etc.). We decided to explore and improve the way IFTTT receives and asks for user feedback.

The Problem ⚠️

The system IFTTT had for receiving and asking for feedback was half-baked and produced ineffective results:

  1. It was hard to navigate to the feedback section, making users less likely to give feedback.
  2. The way it asked for feedback invited many undesired responses and less useful feedback.
  3. Feedback could only be given to IFTTT (the company itself), not to the other services on the platform.

Why it matters to the users & the businesses

This problem matters to everyone involved, from IFTTT itself to its users, and even more to the services on the platform, because a better feedback system:

  1. Helps services on IFTTT improve and make new Applets based on user feedback.
  2. Higher-quality Applets make users more productive and happy.
  3. More users will likely join IFTTT as the quality of Applets improves.

This will result in better Applet quality and more Applets being made by services; from there, the engagement rate will come automatically as an after-effect of good-quality Applets being made.

The Cycle

Here is a diagram illustrating how improving the feedback system will benefit everyone on IFTTT.

💙 IFTTT: Improve how feedback is asked and received.

💚 General Users: The improved feedback system will enable and encourage general users to give useful feedback to services on the IFTTT platform.

🧡 Platform users (services): Learn what users want, make improvements, and create more new Applets.

This results in more quality Applets being made on the IFTTT platform, which makes general users happy and draws new users to IFTTT. Along the way, general users also discover new services because of the quality of the Applets those services provide. In the end, everyone benefits. That is also how we found a goal to work towards.

The Goal

Improve the quality of Applets on the IFTTT platform.

How?

By providing ways for users to easily give useful constructive feedback to the services they are using.

Research

Before we dove into brainstorming, I researched two main topics regarding feedback.

From the perspective of a service, what kind of feedback is useful?

Together, we found that good feedback for services is:

  • Feedback that gives context. (Context helps the service understand the user’s motivation and the problem they might be facing.)
  • Feedback that states the user’s problems or needs.
  • Feedback that is actionable.

How to properly ask for feedback?

After digging through many case studies, articles, and papers, we consolidated the results down to the ones applicable to IFTTT’s products and distilled them into three pillars that good feedback prompts are based on.

1. Timing

  • Timing is important: we need to be mindful of when the right time is to ask users for feedback. For example, you would not ask people who downloaded your app ten minutes ago for feedback, as they haven’t experienced the service enough to know what’s good or bad. Choose the moment when asking for feedback is least intrusive.

2. Contextual

  • Asking for feedback in the right context is important too, as it maximizes the chance of receiving relevant and useful feedback. For example, if an IFTTT user disables an Applet after using it for a while, we can prompt them to tell us what went wrong with the Applet they just disabled, so the service can improve on it. This mostly determines when and where we ask for feedback.

3. Guided

  • To get more out of a single feedback prompt, it has to be guided: overwhelming users with a long list of questions can be intimidating. We should guide users through questions one at a time. Guided prompts also mean being specific about what we ask, which leads to better feedback in return.
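The timing and context pillars can be read as a simple eligibility check: only prompt when the user has enough experience to judge, hasn’t been prompted recently, and is at a moment with clear context. Here is a minimal sketch of that idea; the function name, threshold, and inputs are hypothetical, not IFTTT’s actual logic.

```python
from datetime import datetime, timedelta

# Hypothetical threshold: too early, and the user can't judge yet.
MIN_USAGE = timedelta(days=3)

def should_prompt_for_feedback(installed_at, now, just_disabled_applet, prompts_shown_today):
    """Decide whether this is a good moment to ask for feedback.

    Timing:     skip users who haven't used the service long enough,
                and don't nag with repeated prompts.
    Contextual: prefer moments with clear context, such as right after
                the user disables an Applet.
    """
    if now - installed_at < MIN_USAGE:   # too early to have an opinion
        return False
    if prompts_shown_today >= 1:         # stay unobtrusive
        return False
    return just_disabled_applet          # only ask with clear context
```

The guided pillar then shapes what the prompt itself looks like: one specific question at a time rather than a long form.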

This research gave us new knowledge of the area and surfaced more questions and problems. That was great, because it gave us ideas on what to focus on and what questions to bring into our brainstorm session.

Brainstorm

Brainstorming helped us generate different ideas and different approaches to how we might tackle the problem.

We kept the three pillars of good feedback in mind while brainstorming. Starting each question with “How might we…” gave us a nice starting point, and eventually we settled on a few approaches to kick off the design.

Topics and questions we explored:

  • How might we drive users to rate Applets?
  • How might we drive users to suggest Applet ideas to services?
  • How much feedback is enough/useful?
  • What are the potential types of feedback we can ask?
  • From the perspective of a Service, what types of feedback would be most useful?
  • What would an ideal “piece of feedback” be?
  • Can we work backwards from the ideal “piece of feedback” to an experience that hand-holds the user to give that feedback?

Brainstorm Result

We came away with a list of things we would like to know from the general IFTTT user, such as:

How would they rate an Applet?

If they don’t like it, we ask them why by giving them three choices:

  • The Applet is too slow
  • The Applet doesn’t work as intended
  • The Applet is not useful

We found these to be the three main reasons users have something negative to say about an Applet.

Sketches

During the sketching/ideation stage, we audited the whole IFTTT website to find places where we could ask for feedback. As we got a more in-depth sense of direction on what the designs could look like, we moved on to the wireframe stage.

Wireframes

Rating System

We consolidated all the different ways we experimented with rating an Applet into four distinct types.

Metrics like a 1–5 number or stars felt a bit vague, since an Applet is a simple thing to judge, unlike a movie, which fits a 1–5 or 5-star rating system. A rating like 2/5 doesn’t make sense for an Applet: what does that even mean?

We also experimented with skipping the rating and just letting the user choose a specific reason, but in the end we wanted an easy way to know how an Applet is doing overall, so we settled on a simple metric: like or don’t like.
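One nice property of a binary metric is that the aggregate is trivially interpretable. A sketch of what such an aggregate could look like (an assumed computation for illustration, not IFTTT’s actual one):

```python
def applet_score(likes, dislikes):
    """Overall rating as the share of positive ratings, from 0.0 to 1.0.

    Unlike an averaged star rating of 2/5, "liked by 90% of raters"
    has an obvious meaning to both general and platform users.
    """
    total = likes + dislikes
    if total == 0:
        return None  # not yet rated
    return likes / total
```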

At this stage, we hadn’t settled on a design decision, as we needed to flesh out the rest of the flow to determine which rating system worked best.

Rating Flow Iterations

We then dug deeper into the rest of the user flow. What happens after the user clicks thumbs up or down? Is that it?

Iteration 1

This iteration takes a simpler approach, asking only what the service can do better if the user doesn’t like the Applet.

Rating flow iteration 1

Iteration 2

When the user gives an Applet a low rating, we try adding friction to the feedback form to filter out useless feedback. The trade-off is giving up a simple feedback flow; in return, we gain a higher chance of receiving quality feedback.

Rating flow iteration 2

We added a section where the user chooses one of the three main reasons we found users dislike an Applet:

  • The Applet is too slow
  • The Applet doesn’t work as intended
  • The Applet is not useful

This gives platform users more insight into what might be wrong, and gives general users a simpler way to pick the option they relate to most.

Iteration 3

For our final iteration, we decided on a thumbs up/down representation, as it is universally understood as good versus bad, unlike Yes and No, which requires more context. Another reason we chose thumbs up/down is that it converts well into a metric of how good an Applet is.

Rating flow iteration 3

In our improved feedback flow, we say thank you and ask for a suggestion if the user gives the Applet a thumbs up. If the user gives a thumbs down, we instead ask them to choose a reason first, and based on their choice we determine whether to ask for a written suggestion.

So if the user chooses “It is too slow”, there’s nothing more to say and we go straight to the “thank you” state; but if the user chooses “It is not useful” or “It does not work as intended”, we give them a written feedback option so they can elaborate on their issue.
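The branching above boils down to a small decision table. Here is a sketch of that logic; the state names and reason strings are illustrative, not IFTTT’s implementation.

```python
# Thumbs-down reasons that warrant a written follow-up: the user may
# have specifics worth elaborating on. "It is too slow" needs no detail.
REASONS_WITH_WRITTEN_FEEDBACK = {
    "It is not useful",
    "It does not work as intended",
}

def next_step(rating, reason=None):
    """Return the next screen in the feedback flow."""
    if rating == "up":
        # Positive rating: thank the user and invite a suggestion.
        return "thank_you_and_ask_for_suggestion"
    # Thumbs down: the chosen reason decides whether we ask for details.
    if reason in REASONS_WITH_WRITTEN_FEEDBACK:
        return "written_feedback"
    return "thank_you"
```

Keeping the branching in one place like this also makes it easy to add or retire reasons without touching the rest of the flow.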

Mockups

The mockup designs were more straightforward than the wireframes, as we followed IFTTT’s strict design guidelines to keep everything consistent with IFTTT’s design language: compact but bold, with clear, obvious designs.

We explored different styles for the design, ranging from compact sizes to a wider layout that encourages users to write more thoughtful feedback.

Final Design

In the end, we settled on a compact, mobile-friendly design with a more obvious call-to-action button that clearly shows what the user has chosen (thumbs up/down) and what they should do next (Submit).

Results 📊

Although it is a little early to know what effect user ratings and suggestions will have on Applets, we are stoked to see the number of ratings and suggestions users have submitted to the services they use.

Connecting the IFTTT community and our service customers through the feedback feature was an important first step in driving continuous improvement to the network. We’ve seen a correlation between service-owners viewing feedback, updating their Applets or services, and receiving increased positive ratings.

— Jamison, Project Manager at IFTTT

Here are some numbers…

Number of Ratings

Number of Suggestions

Includes Applet, service, trigger and action suggestions.

Credits ✨

Thank you to everyone involved in this project for being wonderful to work with, and thank you to those who gave us helpful and insightful feedback.

Design: Small Animal Studios (Dustin Senos, Kee Yen Yeo)

Product Manager: Jamison Ross

Engineers: Esten Hurtle, Alan Ly, Chris Janik, Max Ince, Trevor Turk, Apurva Joshi (Data)

Others: Linden Tibbets, Brian Hardie, Connor McIntire
