Transforming marketing planning and collaboration

TL;DR

My role:
Product design (UX, UI, Research), Product (managing backlog, framework, prioritisation, sprint planning)
The initial brief: Build an MVP for BETA launch, validate and expand understanding of requirements
Solution: A marketing planning tool with requirements for common social channels, which allows branching and version control
Timeframe: 10 weeks to BETA, 2 months full-time post-launch, 3 months part-time
Impact: Positive feedback about the product; still searching for product/market fit and for how the solution sits with current COVID business objectives.

CONTEXT

I joined actioncy late in January 2020 for four months full-time, and three months part-time (COVID times :P). I replaced another designer who had left the project a month or so before.

I was brought in as a product designer, working with the founder, and a full-stack developer. Beyond research and design, I also adopted responsibilities which would generally sit under product management such as sprint planning, roadmap organisation and leading retros.

It was an exciting time for the founder: having gone through an acquisition by Apple at a previous company, he was looking for a new challenge. At the time of joining, there was a simple brand, a marketing site and one pillar of the product: the directory. The core features for the idea were chosen following previous validation. My brief was to deliver a product with those features (with refinement) for a BETA release, whilst validating their utility.

Other pieces of work during my time with actioncy included:
- a design strategy sprint
- developing the brand
- redesigning and building the marketing site in Webflow.

Day one challenges
- Taking over from another designer - what work should be carried forward?
- The gap between designers created pressure to hit the ground running on a technical problem
- Predetermined scope - to what extent could / should it be challenged?
- Business case - should I be looking at this also?
- Conducting research with the fixed scope in mind - looking to maximise learnings but also test the hypothesis created by these features

What we set out to achieve

Primary objective

Deliver a series of features which respond to "How might we make marketing planning better?"

(for marketing managers, brand managers)

Secondary objectives
- Understand the problems which the features had set out to solve
- Validate the chosen scope and feature set
- Understand problems faced with marketing workflows and planning, communication between stakeholders, and current planning tools.
- Formalise and validate future ideas and pain points for post-BETA.
- Develop a framework around 'product', such as an implementation plan, roadmap, backlog and prioritisation framework

PROJECT STAGES

We set out to deliver the first release in 10 weeks. This case study is grouped into separate stages, but in reality, research, design and implementation ran much closer together.

There was an existing user-accounts model which we used, and some of the simpler journeys were front-loaded to get the build moving, buying time for more complex journeys which required more validation.

The work itself

Kick off and research

Objectives:

- Using the build requirements as a reference, validate the hypothesis and increase understanding of the pain points they were solving.
- Increase understanding of the marketing planning process and the issues faced by the stakeholders.
- Ongoing validation of designs for key journeys


Method:

- Conducting face-to-face (and increasingly remote) qualitative interviews and testing sessions with marketing managers and more senior marketing leaders. 'Testing sessions' were mixed between what I would consider 'usability tests' and also 'conceptual feedback' for more early-stage ideas.
- Moderated and unmoderated testing (using maze.design)
- Mocking up quick designs and prototypes in Figma to test and evolve the ideas.
- Developing a range of research artefacts to be used internally as references and living documents.
- Working in weekly sprints developing, validating and iterating on the solution
- Due to the timeframe, I took very early-stage mock-ups to even the first sessions.

Qualitative research

In the build-up to the BETA release, I conducted about 20 sessions with a range of hands-on marketers and leaders. The sessions were structured as follows:

In terms of the design, I split the participants into two camps, starting with what we needed to build first.

1. Organisation, campaign and marketing plan creation

2. Marketing plan submissions and reviews


Example key questions
Can you tell me about the last time you planned marketing activity?
What are the most challenging aspects of planning marketing activity?
What tools are you currently using? What has your experience been like using them?
What are the biggest challenges you face when collaborating with clients / partners?

The interview component of the sessions was very insightful, providing many rich stories supporting the hypothesis: planning and coordinating marketing activity can be a messy process, particularly for larger organisations and those working with many partners.

There were also many valid challenges to the idea and the designs which were put to the participants.

Summary of problems

A graph showing the moments and events between marketing planning and delivery which caused the most friction for the people we spoke to.

We would be focussing on communication and delivery pains with our BETA hypothesis.

Overlap with features in scope

Many of the problems uncovered during the interviews were addressed by features set out in the requirements, validating the opportunity for the product to overcome these pain points.


Personas
From the sessions we started refining personas for the two roles we were designing for. Considering the BETA scope, the primary user was the marketing manager / brand manager, a.k.a. "the hands-on marketer".

The other user who would benefit we coined "the marketing leader".

The images below show some of the planning and synthesis which took place at this stage.

Design part 1 - Intro and creating a campaign


Intro

The biggest challenge I faced during this phase was trying to shift the team's mentality away from "This feature needs to do X and X and X". In some ways, it was over-engineered, as so much thought had been given to the requirements before my arrival. Balancing objective validation with a fixed scope was a constant challenge.

I was playing catch-up, ultimately trying to deliver enough of a product that teams and individuals could use for real work, while keeping to the M of MVP.

The other challenge I faced was breaking down the build and getting the developer working on safer areas (those requiring less testing and validation), such as "creating a campaign".

This would buy me time whilst I worked and iterated upon more complex areas such as the timeline and submissions.



The vision: Creating the new agile marketing

- Marketing plans can be created quickly and provide up-to-date requirements for common marketing channels
- Partners can work independently of each other and branch off the master plan
- Marketing managers can maintain control and approval, merging submissions when they are happy.
- Github for marketing :P

Initial scope

At a high level, the initial scope can be summarised as the ability to:


Here is a snapshot from Productboard, providing a little more detail. The first milestone was the BETA release, and we then switched to quarterly objectives.



For the purpose of the case study, I'll omit some of the more basic areas of the product (settings, profile, invitations) and focus on three journeys and some of the pivots and challenges we faced. I will look at:

1. Creating a campaign and populating it (with keybeats and channel-specific actions)
2. Submitting the campaign
3. Reviewing submissions


Creating a campaign and populating it

The user flow and map below show the steps taken by a user to create a campaign and add a keybeat to it, i.e.

As a marketing manager, I want to create a campaign for my upcoming release, so I can have a workspace to plan all related activity.

Then

As a marketing manager, I want to add the main keybeats to the timeline, so that I have a record of when the main phases occur and can add further actions to support them.

Both the creation of the campaign and the addition of the keybeats and actions create one space for everyone to collaborate on the plan, working to one "source of truth". The idea being that, with pre-populated requirements and versioning, it will dramatically improve on emails, spreadsheets and the last-minute panic of looking for up-to-date requirements.
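To make the hierarchy concrete, here is a minimal sketch of how a campaign, its keybeats and its channel-specific actions could be modelled. The names and shapes are my own illustration, not actioncy's actual schema:

```typescript
// Hypothetical model of the campaign hierarchy (illustrative names only).

type Channel = "facebook" | "instagram" | "twitter" | "youtube";

interface Action {
  id: string;
  title: string;
  channel: Channel;
  date: Date;
  // Pre-populated requirements for the channel (e.g. asset dimensions),
  // so partners don't have to hunt for up-to-date specs.
  requirements: string[];
}

interface Keybeat {
  id: string;
  title: string;     // e.g. "Announcement", "Launch"
  date: Date;
  actions: Action[]; // channel-specific actions supporting this phase
}

interface Campaign {
  id: string;
  name: string;
  keybeats: Keybeat[];
}

// A marketing manager creates a campaign, then adds keybeats to its timeline.
const launch: Campaign = {
  id: "c1",
  name: "Spring release",
  keybeats: [
    { id: "k1", title: "Announcement", date: new Date("2020-04-01"), actions: [] },
  ],
};
```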

I had three questions pushing my design throughout this phase:

1. How might we help people populate campaigns as quickly as possible?

2. How much detail do people need to plan? How much of their workflow can we accommodate?

3. How can we accommodate different workflows?


Throughout the evolution and testing of this journey, we uncovered many problems, which helped us iterate quickly. As mentioned previously, the testing was done in two ways:


1. Conceptual - the prototype was used to facilitate conversation and test general utility and understanding of the product

2. Usability testing - task-based scenarios to help us understand the gaps in design or conceptual challenges. An example from this journey and submissions is shown below:



Findings




Showing an example of iteration - the timeline

This was the very first timeline mock-up, used in early testing and conversations. It shows a single-day view of the timeline, which contains a list of scheduled actions and a prominent action button allowing the user to add more.



Iteration throughout the ages (from top left to bottom right)

A range of concepts was tried; the most notable changes were the switch from the side nav to the top nav, and the removal of the expanded view of the line item due to technical constraints (it remains in the backlog).




The version we launched with

Below you can see the master plan in view mode. We wanted to distinguish between edit and view modes, to show users when they were looking at the master plan, and when they were creating a branch off it and working in their own version.



Edit mode


We developed the "edit mode" layer, which became a little softer as we went through iteration. Its objective was to make it clearer to the user that they were working in a branch, rather than working directly on master. The hypothesis was that branching and submissions would be useful for larger teams with multiple partners, who needed to maintain a level of control over the approval process and master. It would also allow partners to work in their own space independently, and submit work for review when ready.



We went through the same level of iteration in all areas of the application. I'll give three more examples:

During the early stages of research, some of the less technical users found it hard to understand that they were creating a branch off master, as this only became clear once they were in edit mode and ready to submit or save. To some, it wasn't even clear then. We built the following modal to give users context about what was happening. However, because this was also where the user named the version (due to technical constraints), the modal appeared every time the user made an edit, which caused friction.

It was removed, and we allowed users to name the file within edit mode, including the ability to name/rename within the timeline itself. It also posed the question of whether all teams (particularly smaller ones) would need the versioning. This would be something we would consider in the future.


Another notable change, which wasn't picked up during early testing, was the overlay covering the view of the timeline. This wasn't a problem in isolated tests with dummy data, but with real campaign information, it became a prominent, cringe-worthy pain point. We fixed it by bringing the side panel inside the timeline workspace, rather than keeping it in an overlay. This provided the user with the context they needed to make decisions when editing or shifting events.



A final example, shown below, made it quicker to add actions and keybeats by introducing hotkeys which open the side panel.
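As a rough sketch of the pattern (the key bindings and function names here are assumptions, not the shipped implementation), the hotkey handling might look like this:

```typescript
// Hypothetical hotkey handler: "a" opens the side panel to add an action,
// "k" opens it to add a keybeat. Bindings are illustrative, not actioncy's.
type PanelMode = "action" | "keybeat";

function openSidePanel(mode: PanelMode): void {
  console.log(`Side panel opened in "${mode}" mode`);
}

document.addEventListener("keydown", (event: KeyboardEvent) => {
  // Ignore keystrokes while the user is typing in a form field.
  if ((event.target as HTMLElement).tagName === "INPUT") return;

  if (event.key === "a") openSidePanel("action");
  if (event.key === "k") openSidePanel("keybeat");
});
```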


Design part 2 - Submissions and reviews




One of the leading ideas of the product and one of the main differentiators was the idea of submissions. Based on the principles of git, this would allow multiple parties to work independently from the master plan and then merge their work back in when they are ready.

Consequently, the manager maintains control of the master, merging only the submissions which they approve. It also keeps all of the work in one unified format, rather than having to deal with long email threads and multiple versions of Excel spreadsheets.

The interaction here takes place between two roles: the submitter and the reviewer.




Branching and submissions
The diagram below illustrates the power behind git, and the USP of the product over other tools: its branching functionality, allowing independent work streams which can be brought back together.
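For readers who know git, a minimal sketch of the branching model helps; the structure below is my interpretation of the behaviour described, not the production code:

```typescript
// Hypothetical branching model: each branch snapshots the plan it was
// created from, so partners can work independently of master.

interface PlanEntry {
  id: string;
  title: string;
  date: string; // ISO date, e.g. "2020-04-01"
}

interface Branch {
  name: string;
  baseVersion: number;  // the master version this branch was created from
  entries: PlanEntry[]; // the partner's independent working copy
}

interface MasterPlan {
  version: number;      // incremented on every merge
  entries: PlanEntry[];
}

function createBranch(master: MasterPlan, name: string): Branch {
  return {
    name,
    baseVersion: master.version,
    entries: master.entries.map((e) => ({ ...e })), // copy, don't share
  };
}

function merge(master: MasterPlan, branch: Branch): MasterPlan {
  // Naive merge: the branch's entries replace master's (conflict
  // handling is covered later in the case study).
  return { version: master.version + 1, entries: branch.entries };
}
```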





I had three questions pushing my design throughout this phase:

1. How might we create a review process which is easy to understand?

2. How can we display complex conflicts to non-technical users? How much do they need to understand?

3. How can we make this work for small teams too?



The submitter

Having populated their plan, the individual is left with three options:

1. Apply to master (if they have the role-based permission to do so)
2. Submit for review
3. Save as draft



1. As a marketing manager, I want to work in my own branch, but apply the changes directly to master, as I don't need anyone to review them

OR

2. As a marketing manager, I want to submit my plan to be reviewed by my senior or colleague

OR

3. As a marketing manager, I want to save this plan as a draft, so that I can come back and finish it later


All three of the above options are presented to the user in the UI at the same time, allowing for an open workflow, with the decision made once the work has come to its conclusion.
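A sketch of how those three outcomes could be handled; the names and the permission check are assumptions for illustration:

```typescript
// Hypothetical handling of the three submitter options.
type SubmitChoice = "apply-to-master" | "submit-for-review" | "save-draft";

interface SubmitterContext {
  canApplyToMaster: boolean; // role-based permission
}

function handleSubmit(choice: SubmitChoice, ctx: SubmitterContext): string {
  switch (choice) {
    case "apply-to-master":
      if (!ctx.canApplyToMaster) {
        return "Not permitted: submit for review instead";
      }
      return "Changes merged directly into master";
    case "submit-for-review":
      return "Submission created and sent to reviewers";
    case "save-draft":
      return "Branch saved; come back and finish later";
  }
}
```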

Below shows the user flow and map of these touchpoints in the application:



Here we can see the options presented to the user in the UI when they submit their plan. This area went through extensive testing to verify that users understood the implications of each action. The descriptive text was added during this period to make each option's effect clear.


Below we can see a version of the submissions page towards the end of iteration. It is the screen that would be seen by all users (the copy is tailored for an admin with approval rights), as we wanted to encourage transparency for all members of the campaign, i.e. if you are in the campaign, you can see what other members have submitted.


The reviewer

Upon receiving the submission, the admin is able to do three things:

- request changes
- archive the submission
- accept the submission

1. As an admin, I want to request changes to the submission, as I don't quite agree with what is being proposed.

OR

2. As an admin, I want to archive the submission as it's no longer relevant

OR

3. As an admin, I want to merge the content of this submission into the master plan, as I am happy with what is being proposed.


Below we can see the user journey and map for the touchpoints described above and the options available to the admin. You will see that the initial journey described one of the actions as "closing a submission", but through testing we changed this to "archiving".
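The reviewer side reads naturally as a small state machine. The sketch below is my own framing of those transitions, including the rename from "closed" to "archived":

```typescript
// Hypothetical submission lifecycle on the reviewer side.
type SubmissionState =
  | "submitted"
  | "changes-requested"
  | "archived"   // renamed from "closed" after testing
  | "merged";

type ReviewAction = "request-changes" | "archive" | "accept";

function review(state: SubmissionState, action: ReviewAction): SubmissionState {
  if (state !== "submitted" && state !== "changes-requested") {
    throw new Error(`Cannot review a submission in state "${state}"`);
  }
  switch (action) {
    case "request-changes":
      return "changes-requested";
    case "archive":
      return "archived"; // no longer relevant
    case "accept":
      return "merged";   // content merged into the master plan
  }
}
```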

Reviewing a submission

The image below shows the individual submission view and the options available to the admin. The dialogue also shows the user that conflicts must be dealt with before the submission can be merged with master (I won't be going into detail about conflicts here). This reflects the version we went live with; although we had tested and validated more useful versions of the UI (shown below), the change request functionality wasn't built. We needed to make the release.

The two examples below show how the design evolved from the image above through testing and iteration. It was a delicate balance between showing the user the information they needed to see what had changed (not enough above) and overwhelming them with too much.




Accepting updates

Accepting updates was another component of the submissions process worth looking at, and it proved less of a conceptual challenge during testing. To face conflicts head-on, and reduce the likelihood of admins having to deal with conflicts at the submission review level, we 'encouraged' users to accept updates from master when revisiting a draft which master had since moved beyond.
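In code, the "your draft is behind master" check can be as simple as comparing version numbers; a minimal sketch, assuming the same versioned-branch model as earlier:

```typescript
// Hypothetical check run when a user reopens a draft: if master has
// moved on since the branch was created, prompt them to accept updates.

interface Draft {
  baseVersion: number; // master version the draft branched from
}

function needsUpdateFromMaster(draft: Draft, masterVersion: number): boolean {
  return draft.baseVersion < masterVersion;
}

// Example: the draft branched from v3, but master is now at v5,
// so the user is encouraged to pull in updates before continuing.
console.log(needsUpdateFromMaster({ baseVersion: 3 }, 5)); // true
```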



Dealing with conflicts

In its simplest form, a conflict occurs when two details of the same entry differ between branches and cannot be merged automatically. Either the submitter or the reviewer can be faced with resolving these, but it keeps the master plan clean and error-free. It was a substantial conceptual challenge for users throughout, and something which, to this day, I think would have remained a steep learning curve for non-technical users.
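Under that definition, conflict detection can be sketched as a field-by-field comparison of the entries shared by both branches. The shapes and field names are illustrative only:

```typescript
// Hypothetical conflict detection: a conflict is the same entry having
// different details on the two branches being merged.

interface Entry {
  id: string;
  title: string;
  date: string;
}

interface Conflict {
  entryId: string;
  field: "title" | "date";
  mine: string;
  theirs: string;
}

function findConflicts(mine: Entry[], theirs: Entry[]): Conflict[] {
  const conflicts: Conflict[] = [];
  for (const m of mine) {
    const t = theirs.find((e) => e.id === m.id);
    if (!t) continue; // entry only exists on one branch: no conflict
    if (m.title !== t.title) {
      conflicts.push({ entryId: m.id, field: "title", mine: m.title, theirs: t.title });
    }
    if (m.date !== t.date) {
      conflicts.push({ entryId: m.id, field: "date", mine: m.date, theirs: t.date });
    }
  }
  return conflicts; // must be empty before the merge can proceed
}
```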

Some of the variations are shown below

Post-release improvements


After the initial release, at the end of the ten weeks partly covered by the sections above, we worked through a series of improvements based on feedback from our users. We worked in much quicker iterative cycles and released minimum versions of these features to test usage. It was during this phase that I was able to have a more significant influence on the build, and the team was under less pressure over what to deliver (the scope wasn't as fixed). All of the features below came from requests from at least five users, and in some cases more.


Custom actions

Provides users with the ability to add custom actions to the actioncy directory of actions. These can be actions for existing channels (e.g. adding to Facebook), for new channels specific to the business (e.g. a blog post), or for channels not yet covered by actioncy (e.g. a new social media platform).
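One way to model this (a sketch with assumed names; actioncy's actual schema isn't shown in this case study) is a union of directory-maintained actions and user-defined ones:

```typescript
// Hypothetical action directory: built-in actions carry pre-populated
// requirements; custom actions let users cover channels we don't.

interface DirectoryAction {
  kind: "directory";
  channel: "facebook" | "instagram" | "twitter";
  requirements: string[]; // maintained by actioncy
}

interface CustomAction {
  kind: "custom";
  channel: string;        // free text, e.g. "blog", "new social platform"
  createdBy: string;      // the user who defined it
}

type DirectoryEntry = DirectoryAction | CustomAction;

const blogPost: DirectoryEntry = {
  kind: "custom",
  channel: "blog",
  createdBy: "user-42",
};
```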


More views

It was always on the roadmap, but we received significant requests for the content to be displayed in more calendar views, such as the weekly one below (live) and a roll-up view across multiple campaigns (bottom - not live).



Status and links

It remains our conviction that we are not a project management tool, and much of our research indicated that marketers using these tools encounter significant frustration. However, the ideas of status and ownership were key themes for our users, so we released a lightweight status tool and enabled space for links to be added to each asset (not shown). We are exploring integrations, which we hope will also show how the specialist and generalist tools can coexist.


Lightweight templating

This offers users a quick way to duplicate campaigns or create templates, instead of creating many similar campaigns from scratch. This was a light test, rather than building a more comprehensive templating framework.
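Conceptually, the lightweight version is a deep copy whose timeline items are stored relative to the campaign start, so duplication "re-dates" them for free. A sketch with assumed shapes:

```typescript
// Hypothetical campaign duplication: copy the structure; because item
// dates are stored as offsets from the campaign start, the copy lands
// on the right dates for the new campaign automatically.

interface TimelineItem {
  title: string;
  offsetDays: number; // days from the campaign start
}

interface Campaign {
  name: string;
  startDate: Date;
  items: TimelineItem[];
}

function duplicateCampaign(source: Campaign, name: string, startDate: Date): Campaign {
  return {
    name,
    startDate,
    items: source.items.map((i) => ({ ...i })), // copy, don't share
  };
}
```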


The future

The tool remains in development, with the current goal being to:

- bring real-time collaboration within plans with multiplayer
- improve communication in the platform with commenting
- integrate with project management tools
- increase the speed and convenience of planning by creating blocks of regular actions (concept stage - shown below)
- create a new view and edit layer for a more engaging front-end experience.


SUMMARY

Positive

- We delivered the MVP under difficult circumstances and tight time constraints, to improve collaboration and planning.
- We validated that the pain points exist
- The research and feedback post launch also helped to make a series of improvements
- Visual and UX feedback was positive from everyone who used it
- We have a few power users who like and support what we are doing
- For a small team, we have done well!


Negative

- We launched to smaller teams, which was good for getting feedback and usage, but it also made it hard for us to get into the companies really tackling these problems at scale.
- We have a bit of a chicken-and-egg problem: we need to enhance our reputation to get into the larger companies which face the problems we are solving at scale, but we are currently sitting with smaller ones.
- 20%* of signed-up users return more than twice
- 10% return more than five times
- Validation should have been more business-focussed.
- Right now, adoption has not been as hoped, and we have also faced the reality of people who validated the hypothesis and pain points not wanting to solve them when given the opportunity! "I can get by with being messy and making mistakes" is a real quote! Humans!

*obfuscated


Assumptions regarding lower-than-hoped-for engagement

1. People being unwilling to switch tools in the middle of campaigns
2. After planning campaigns, users tend not to use the tool that frequently, unless something changes
3. COVID shifting business priorities
4. Launching to smaller companies also meant we were dealing with people with smaller marketing campaigns and less expertise, who therefore plan less
5. All of the companies we launched with work in gaming, which was experiencing an all-time boom, and thus less concerned with optimisation
6. Difficulty in getting in front of decision-makers in organisations, despite buy-in at mid-level.


CONCLUSION

My time spent at actioncy was successful on a personal level, and it was a positive experience working with the team. I'm happy to have delivered on the brief, executed a research-driven MVP under tight constraints, and set up a product framework which would help the product grow and build features sustainably. Client feedback was good, as was feedback on the design. We are still searching for product/market fit at actioncy, as it sits a little too much in the "nice to have" category.

Writing this one week after the conclusion of the 'future proposition' strategy sprint (not shown in this case study) I can't help but have mixed feelings. Launching a start-up during COVID-19 was never going to be easy. As a team, we were all consistently surprised by the difficulties experienced in sales and marketing conversations, and also product engagement. For the first time in my career, I found it hard to get meaningful feedback post-launch. Every piece of feedback and every interview was earned.

I think, in part, my experience was also a case of the "designer's curse", which is accentuated even more as a contractor. You don't (fully) choose the brief. You don't choose the industry or business opportunity. Over time, you can shift these things, but building on the validated hypothesis didn't result in a seismic shift of business activity during my time there. Driving the product and research forward whilst working part-time was incredibly challenging, when what was needed was more R&D to help understand the pivot opportunity, whilst maturing the sales and marketing efforts. Going up against some big players also counted against us.

We were solving the problem of "How might we make Marketing Campaign planning and collaboration better", but in reality, it may not be what businesses are most concerned about in COVID times, despite the lack of efficiency and problems faced by larger teams.

WHAT I COULD HAVE DONE BETTER

On reflection, when initially receiving the brief and picking up the project, I could have challenged some of the business assumptions and launch plans further. I intentionally left a certain amount of this aside, as I was tasked with researching and developing an MVP to test the hypothesis. The impact was that I didn't support the sales and marketing efforts as much as I could have.

Another question I have is: "Could we have staggered the build slightly and released a week or two earlier?" Perhaps. But it would have been at the expense of some of the collaboration and versioning functionality which was a core part of the hypothesis, and it ultimately would not have affected the medium-term success.

In the same vein, I should have stepped beyond the product and considered some of the barriers to implementation and organisational adoption, e.g. creating business cases and validating their resonance with stakeholders.

Arriving on a project with target companies and research participants already lined up, I didn't do the best job of differentiating between the individuals we were interviewing and testing with, and the individuals with whom the initial business validation was done. Consequently, as an oversimplification, we rolled out to small teams, but some of the features addressed problems faced by larger teams.

In terms of design, when I joined I was reluctant to completely wipe the slate of everything the previous designer had done (considering that an unnecessary optimisation for an MVP), and, having initially joined on a ten-week contract, I was perhaps too concerned with getting buy-in and collaborating on something the team would have to carry forward (rather than pushing my own vision ad nauseam). Consequently, we carried forward more design legacy than needed, and I had to live with designs which I think could have been better, but could easily have been changed.