Enhancing Review Automation by Leveraging User Feedback

I identified an opportunity, convinced leadership, and led the design effort to enhance an existing offering: Review Automation.

In addition to bulk automation of review requests, sellers gained the ability to schedule requests, customize which product listings to include, and access detailed performance analytics.

These enhancements propelled Review Automation to become Jungle Scout’s most successful feature of the year, achieving the highest user adoption rates.

Background

Jungle Scout provides Amazon sellers with data insights and assists with store management. In order to win back market share and increase the product's value proposition, we were tasked with identifying low-hanging fruit that could be tackled swiftly.

Jungle Scout's web application has a tool called Review Automation (RA). RA is a feature that completely automates the Seller Central review request process for Jungle Scout users. It is meant to save time and prevent users from missing an opportunity to earn a review.

Identifying the problem

We knew that Review Automation was a limited feature with only basic functionality, and it was time for an upgrade. I had an instinct that a higher level of customization could help it live up to its true potential.

My approach

  • To understand the impact Review Automation has on user churn, I collaborated with the customer success and business intelligence teams. We ran a regression analysis on accounts that went through the churn funnel from August to November 2021, during which 13,560 accounts left our platform. We learned that users with Review Automation turned off were more likely to churn.

  • To understand user requirements, I tracked more than 250 responses from ProductBoard, shadowed customer tickets, and conducted six user interviews.

Execution

  • I identified that our users wanted:

    • The ability to set a custom schedule for sending requests on a per-product basis

    • The option to skip review requests for malicious orders (e.g., placed by competitors)

    • The option to skip requests for products that already have a high positive rating

  • With my PM, I set success metrics that took user and business requirements into consideration. These included:

    • Increasing the number of Amazon and Jungle Scout account syncs

    • Increasing the usage of advanced features

    • Increasing account upgrades through RA

  • Aligned stakeholders and C-suite executives on the plan of action throughout the process.

  • Led the end-to-end design process, from wireframes to high-fidelity handoff, then tested solutions with internal Amazon sellers.
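The customization rules users asked for boil down to a simple per-order filter. Below is a hypothetical sketch of that logic, not Jungle Scout's actual implementation; the data shapes, field names, and the 4.8-star threshold are all assumptions made for illustration:

```python
# Hypothetical sketch of the request-filtering rules users asked for.
# All shapes and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Product:
    asin: str
    avg_rating: float
    delay_days: int        # per-product schedule set by the seller
    skip: bool = False     # seller opted this product out entirely

@dataclass
class Order:
    product: Product
    age_days: int
    flagged_malicious: bool = False  # e.g. suspected competitor order

RATING_CEILING = 4.8  # skip products that already rate this high

def should_request_review(order: Order) -> bool:
    """Apply the three user-requested rules to a single order."""
    p = order.product
    if p.skip or order.flagged_malicious:
        return False
    if p.avg_rating >= RATING_CEILING:
        return False
    return order.age_days >= p.delay_days  # honor the custom schedule

# Example: a 10-day-old order for a 4.2-star product with a 7-day delay
item = Product(asin="B00EXAMPLE", avg_rating=4.2, delay_days=7)
print(should_request_review(Order(product=item, age_days=10)))  # True
```

A filter like this keeps each rule independent, so adding a new skip condition later does not disturb the scheduling logic.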

Designing the solution

Cross-team collaboration

In collaboration with the CX and BI teams, we uncovered the following insights about the churned accounts:

  • 68% were non-sellers (9,215)

  • 29% were sellers with RA turned OFF (3,912)

  • 3% were sellers with RA turned ON (433)

  • Users with RA turned ON were 89% less likely to churn (monthly)
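As a back-of-the-envelope illustration of how a relative-churn figure like the 89% above can be derived, here is a minimal sketch comparing segment churn rates. The counts are made up for illustration; they are not the real August-November 2021 data:

```python
# Illustrative sketch: compare monthly churn rates for sellers with
# Review Automation (RA) on vs. off. Counts below are made up.

def churn_rate(churned, total):
    """Fraction of accounts in a segment that churned."""
    return churned / total

# segment -> (churned accounts, total accounts) -- hypothetical numbers
segments = {
    "ra_on":  (40, 4000),    # sellers with RA enabled
    "ra_off": (360, 4000),   # sellers with RA disabled
}

rate_on = churn_rate(*segments["ra_on"])
rate_off = churn_rate(*segments["ra_off"])

# Relative reduction in churn for RA-on vs. RA-off sellers
reduction = 1 - rate_on / rate_off
print(f"RA-on churn: {rate_on:.1%}, RA-off churn: {rate_off:.1%}")
print(f"Relative churn reduction: {reduction:.0%}")  # prints 89%
```

With these stand-in numbers, a 1% churn rate for RA-on sellers against a 9% rate for RA-off sellers yields the same order of reduction cited above.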

I used the data to create a case and present it to my leadership team. We won approval, created a product brief, and were off to the races!

Understanding business and user requirements

From my user research, we concluded that sellers crucially needed to customize review requests to fit their unique situations. I categorized the collected data into three trends that served as guardrails moving forward.

To support the company's data flywheel strategy, the business wanted an increase in the number of users syncing their Amazon Seller accounts with Jungle Scout. Taking this and the users' requirements into account, my PM and I crafted success metrics. I leaned on Google's HEART framework for usability themes.

Defining the bigger picture

Taking our personas into consideration, I facilitated several brainstorming sessions with my PM, manager, and engineering lead to map out the user flow and journey. These live, interactive sessions helped us segment scenarios and edge cases and explore opportunities collaboratively.

Designing and testing the solution

Holistic view on review requests

Users can now see a holistic view of how their review requests have been performing over a 30-day period. I used large callout cards to draw attention to what matters most. From the button in the upper-right corner, users can access their first level of customization: a time delay based on marketplace. This acts as the first step toward setting a schedule.

Setting time delay on a product level

The second level of customization happens at the product level (in Product Settings). This allows users to set a time delay, or skip requests entirely, for individual parent products or their variants. Product Settings can be accessed from the RA Settings modal, the RA table, or the nav bar.

Design, Test, Learn, Repeat

We conducted extensive internal testing and found that participants failed to perform tasks related to setting a time delay at the product level. To mitigate this, we leveraged tooltips and guided copy that redirected users to the Product Settings page.

As part of the handoff, I provided a detailed usability spec doc covering the different states, interactions, and behavior notes for the various components that make up the tool.

Conclusion

Project learnings and impact

We successfully released the enhanced Review Automation, and internally it set a benchmark for what a sticky feature should look like.

Below are some learnings from the project:

  • Working on a feature that had several layers of logic made us realize the importance of clearly guiding the user every step of the way.

  • A multi-dependent experience needs to be approached with nuance and care. Mapping out the end-to-end user journey early on is essential.

  • Users pay for automating processes. Take the "manual" out of the user's life, and you've got a solution that sticks!