Tips For a Better Redux Architecture: Lessons for Enterprise Scale

So you've decided to try out Redux as your application state manager. Or perhaps you're investigating better options. Good for you. With Redux the good news is that your application will enjoy a productivity boost from the simplicity of knowing precisely where data logic lives. However, Redux alone cannot protect you from a fashionable, spiced-up spaghetti mess. How does it fit into a multi-tiered application composed of several layers of widgets and components that each rely on asynchronous data? To save yourself from this ugly monster you'll need a higher-order architectural convention.

We, at HPE, were facing the same challenges when trying to build a massive React-based UI application. Redux offered a good starting convention for organizing the flow of data through a one-way, globally accessible pipeline, but it didn't go quite far enough in describing how the application should be structured.

We'll begin with the basics.

Redux + React

This article assumes the reader already knows how to use React and Redux.

React has already proven itself in both speed and its ability for headless rendering, and its wide adoption has ensured no shortage of UI engineers who can work with a codebase built on it. Since React simplifies UI rendering by enforcing a unidirectional, top-down data flow, it makes a good pairing partner for Redux.

Redux (paired necessarily with React-Redux) is a great benefit for medium-to-large applications because it offers a convention for how data is fetched, consumed, passed from one component to another, and ultimately displayed in the UI. Allowing Redux to handle the state of the application in a singular/global context takes the guesswork out of knowing what inner-dependencies exist within a large application. With Redux handling the updates to the global application state React can focus wholly on presentation and the handling of user events through props alone.

React's own setState can certainly be used within components but should be limited as much as possible to simple, interim state that isn't critical or likely to be used by any other piece of the application. In these cases the usage should be heavily documented as well.

Redux is used instead of local, ad-hoc changes within the presentation layer, or some other centralized store where accessors can mutate state (such as MobX), because those approaches inevitably leave data flow inconsistent across the application, translating to longer ramp-up time and more difficulty debugging. In fact, to enforce this and protect references to the state we have wrapped everything in Immutable.

For more context on why this might cause issues, consider an application with >100 containers and components that each rely on a set of robust API interdependencies.
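To see why protected references matter, here is a minimal sketch of the idea, with plain objects and spread standing in for Immutable:

```javascript
// Sketch of the reference-safety idea, shown with plain objects and
// spread instead of Immutable for brevity. A pure reducer never mutates
// the incoming state; it returns a new reference, so earlier states stay
// intact and change detection becomes a cheap reference comparison.
const initialState = {isVisible: false, filterBy: null}

function todosReducer(state = initialState, action) {
  switch (action.type) {
    case 'todos/SHOW_TODOS':
      // new object; the old state is left untouched
      return {...state, isVisible: true}
    default:
      return state
  }
}

const before = todosReducer(undefined, {type: '@@INIT'})
const after = todosReducer(before, {type: 'todos/SHOW_TODOS'})
```

With Immutable the same guarantee is enforced structurally rather than by convention, which is why we wrapped the store in it.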

Likewise, the use of a dispatch prop in React components is highly discouraged. Instead, action creators are used exclusively to post updates to the state so that components can remain totally agnostic to state schema design:

// bad
const Todos = ({todos, dispatch}) => {
  return (
    <TodoList todos={todos} onClick={dispatch(...)} />
  )
}

export default connect(
  state => ({todos: state.todos})
)(Todos)

// good
const Todos = ({todos, todoListClick}) => {
  return (
    <TodoList todos={todos} onClick={todoListClick} />
  )
}

export default connect(
  state => ({todos: state.todos}),
  {
    // Instead of using dispatch we just inject action 
    // creators into components that handle how dispatches are
    // constructed and fired.
    todoListClick: createAction(TODO_LIST_CLICK)
  }
)(Todos)
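The createAction helper imported from utils in these examples is never defined in the article; a minimal FSA-style version, which is only an assumption about its shape, might look like:

```javascript
// Hypothetical minimal createAction in the spirit of redux-actions:
// given a type, it returns an action creator that wraps its argument
// as the payload of a flux-standard action.
const createAction = type => payload => ({type, payload})

const TODO_LIST_CLICK = 'todos/TODO_LIST_CLICK'
const todoListClick = createAction(TODO_LIST_CLICK)

const action = todoListClick({id: 7})
// action: {type: 'todos/TODO_LIST_CLICK', payload: {id: 7}}
```

When creators like this are passed to connect's second argument as an object, React-Redux binds each one to dispatch automatically, which is why the component never needs dispatch itself.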

The immediate benefit of constructing an application this way is the certainty that every component and its data flow is architected in exactly the same way. An engineer who is familiar with this pattern can then jump into another area of the application she doesn't yet have experience with and make changes with little learning curve. Furthermore, she can easily scaffold a new component module without the guesswork of what pieces to create or how they fit together.

A secondary but immensely important benefit is the ease of unit testing components and state management. I cannot stress enough how important it is to write unit tests for high coverage. It catches so many bugs before they make it into master. Without going into the virtues of testing suffice it to say it becomes important to make testing as easy and fluid as possible for team members, and this is easiest when presentation and state management concerns are kept as separated as possible.

Modules as a Separation of Concerns

Continuing in this same philosophy, we have embraced an architecture very similar to Redux Ducks and Reactizer. The idea is to keep the application as decentralized as possible by allowing each Module to be responsible for its own feature requirements while at the same time keeping its data in the global store.

New feature modules can be added at any time to extend the application and older features can be upgraded without extensive changes across the app.

Each Module contains the following units:

  • Module:
    • Module File (index.js) contains most of the module's Redux code.
    • Selectors are simple getter functions used to select data from the state.
    • Routes contains the Module's React-Router configuration that is consumed at the top App level. (See System.import usage.)
  • Container: The smart, top rendering class that injects all props from the module file including state and action creators. Any intermediate logic such as filtering, event handler logic, or mounting logic is done in its methods.
  • Components: Dumber individual classes or stateless functions that render specific pieces of the UI based on props from the container.
  • Elements: Super dumb, stateless functions that are used to keep presentation DRY. These do not call anything on props.
  • Tests: Jest unit testing belongs to each Module as well.

An example file structure might look like:

Todos
├── Components
│   ├── TodoList.jsx
│   └── TodoList.test.jsx
├── Elements
│   ├── TodoIcon.jsx
│   └── TodoIcon.test.jsx
├── TodoContainer.jsx
├── TodoContainer.test.jsx
└── module
    ├── index.js
    ├── routes.js
    ├── selectors.js
    └── todos.module.test.js
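For instance, the selectors.js file from the tree above might hold getters like the following (plain-object state is shown here for brevity; the real store slices are Immutable):

```javascript
// module/selectors.js (sketch): simple getters over the todos slice.
// Containers use these instead of reaching into the state shape
// directly, so the schema can change without touching any components.
const getTodoItems = state => state.todos.todoResults.items
const getTodoCount = state => state.todos.todoResults.count
const getIsFetching = state => state.todos.isFetching

// Example global state, shaped the way combineReducers would assemble it
const state = {
  todos: {
    todoResults: {count: 1, start: 0, page: 0, items: ['write docs']},
    isFetching: false,
    isVisible: true,
    filterBy: null
  }
}
```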

We then structure the contents inside the Module's files with the same format for every Module of the application, enforcing uniformity. A typical module/index.js file might look like this:

import {fromJS} from 'immutable'
import {createAction} from 'utils'

// Action Types are namespaced since they are global
export const SHOW_TODOS = 'todos/SHOW_TODOS'
export const NEW_TODOS = 'todos/NEW_TODOS'
export const ADD_TODO = 'todos/ADD_TODO'

// Action Creators are in the same order as action types
export const todos = {
  showTodos: createAction(SHOW_TODOS),
  newTodos: createAction(NEW_TODOS),
  addTodo: createAction(ADD_TODO)
}

// Initial state is always defined
export const initialState = fromJS({
  todoResults: {
    count: 0,
    start: 0,
    page: 0,
    items: []
  },
  isFetching: false,
  isVisible: false,
  filterBy: null
})

// Async flow control goes here...
// More on this below.

// Reducer function comes last, exported as default; it
// will be combined at the App level with combineReducers
export default function todosReducer(state = initialState, {type, payload}) {
  switch (type) {
    case SHOW_TODOS:
      return state.set('isVisible', true)

    case NEW_TODOS:
      return state
        .set('isVisible', true)
        .set('filterBy', 'new')

    case ADD_TODO:
      return state.set('isFetching', true)

    default:
      return state
  }
}
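To show where that default export lands, here is a hand-rolled sketch of the App-level wiring. In the real app this is Redux's own combineReducers and the module reducer is the Immutable one above; plain-object stand-ins are used so the sketch stays self-contained:

```javascript
// Illustrative only: a hand-rolled combineReducers showing how each
// module's default-exported reducer gets mounted under its own state
// key. The real app uses Redux's combineReducers.
function combineReducers(reducers) {
  return function rootReducer(state = {}, action) {
    const next = {}
    let changed = false
    for (const key of Object.keys(reducers)) {
      next[key] = reducers[key](state[key], action)
      if (next[key] !== state[key]) changed = true
    }
    return changed ? next : state
  }
}

// Stand-in module reducer (the real one lives in Todos/module/index.js)
const SHOW_TODOS = 'todos/SHOW_TODOS'
function todosReducer(state = {isVisible: false}, action) {
  switch (action.type) {
    case SHOW_TODOS:
      return {...state, isVisible: true}
    default:
      return state
  }
}

const rootReducer = combineReducers({todos: todosReducer})
const s1 = rootReducer(undefined, {type: '@@INIT'})
const s2 = rootReducer(s1, {type: SHOW_TODOS})
```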

As you might guess, when every module conforms to this pattern it gives us the benefit of knowing exactly where all of the elements of our large and complex application live without sacrificing the agility of using open-sourced libraries instead of a monolithic framework.

If you have experience with Redux you might by now be wondering how asynchronous actions are handled, like adding a new todo that involves POSTing to an API server and receiving a response. Redux has no opinion about how these action sequences are performed, but of all the available addons out there we have found great value in Redux-Saga.

Redux-Saga for Async Flow Control

Any modern application is going to have asynchronous actions. As such, we don't want these actions to block our application as we wait for them to resolve. In fact, we want these actions to spin off "side effects" that can run in sub-processes that can then later report outcomes back to our store. This is where Redux-Saga comes into play.

Redux-Saga is a coroutine runner (a feature sorely missing from native JavaScript) that wraps generator functions called sagas. These functions can yield declarative effects, promises, other sagas, or other types that are automatically handled. Those computed values are then injected back into the saga for us, effectively turning asynchronous code into linear blocks.
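To make "computed values are injected back into the saga" concrete, here is a toy synchronous runner. It shares only the coroutine mechanic with Redux-Saga, none of its effect handling:

```javascript
// Toy coroutine runner: drives a generator to completion, handing each
// yielded value straight back in. The real Redux-Saga also interprets
// declarative effect objects, awaits real promises, forks, cancels, etc.
function run(gen) {
  const it = gen()
  let result = it.next()
  while (!result.done) {
    // Whatever the saga yields is "computed" here and injected back in.
    result = it.next(result.value)
  }
  return result.value
}

function* addNumbers() {
  const a = yield 2 // the yielded value is handed back into the saga
  const b = yield 3
  return a + b      // asynchronous-looking code reads as a linear block
}

const total = run(addNumbers) // → 5
```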

Saga effects are really just object descriptors, defined by Redux-Saga and generated by factories, that the coroutine runner interprets to perform the actual work. To show a simple example:

// in module/index.js
import {fork, take, put} from 'redux-saga/effects'

function* todosSaga () {
  yield [
    fork(addTodoSaga)
  ]
}

// `api` is an assumed helper module whose methods return Promises
function* addTodoSaga () {
  while (true) {
    const {payload} = yield take(ADD_TODO)
    const {body} = yield api.todos.add(payload)
    yield put(todos.addTodoSuccess(body))
  }
}

In this very simple example the todosSaga generator would be mounted on module load with Redux-Saga's runner. The yielded array of effects would then run each one concurrently. In this case, we are "forking" the addTodoSaga which would then run concurrently alongside any other forked sagas.

Since generator functions can be paused on blocking yield statements the infinite while loop just acts as a "keep alive". The yielded take effect instructs Redux-Saga to wait until an action with type ADD_TODO is dispatched at which point the action object is injected back into the saga and captured. Our example api would then call a method that returns a pending Promise (e.g., from fetch), which our runner understands and waits to resolve before injecting the resolved value back into the saga. Finally, the put effect instructs the runner to dispatch or "put" an action back onto the state, which in this case is created by an action creator.

Again, this example is very simplistic and doesn't include error handling (such as if fetch rejected), but it illustrates the power that coroutine runners afford engineers by simplifying otherwise complex asynchronous flows into sync-flowing processes. As an added bonus, since Redux-Saga uses declarative effects, unit testing becomes that much easier since APIs no longer need to be mocked. To see more examples be sure to check out the Redux-Saga docs.
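Because effects are plain object descriptors, a saga test can simply step the generator and compare the objects it yields, with no API mocking. Here is a sketch using hand-rolled effect factories (the real ones come from redux-saga/effects and have a different internal shape):

```javascript
// Hand-rolled stand-ins for redux-saga's effect factories: each returns
// a plain descriptor object, which is exactly why sagas test so well.
const take = pattern => ({TYPE: 'TAKE', pattern})
const put = action => ({TYPE: 'PUT', action})

const ADD_TODO = 'todos/ADD_TODO'
const ADD_TODO_SUCCESS = 'todos/ADD_TODO_SUCCESS'

function* addTodoSaga() {
  const {payload} = yield take(ADD_TODO)
  yield put({type: ADD_TODO_SUCCESS, payload})
}

// "Testing" the saga: drive the generator by hand and inspect the
// descriptors it yields; no server, no mocks.
const it = addTodoSaga()
const first = it.next().value                       // the take descriptor
const second = it.next({payload: 'buy milk'}).value // inject a fake action
```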

So, putting it all together our previous example module/index.js file might look something like:

import {fork, take, put} from 'redux-saga/effects'
import {fromJS} from 'immutable'
import {createAction} from 'utils'

// Action Types are namespaced since they are global
export const SHOW_TODOS = 'todos/SHOW_TODOS'
export const NEW_TODOS = 'todos/NEW_TODOS'
export const ADD_TODO = 'todos/ADD_TODO'
export const ADD_TODO_SUCCESS = 'todos/ADD_TODO_SUCCESS'
export const ADD_TODO_FAILURE = 'todos/ADD_TODO_FAILURE'

// Action Creators are in the same order as action types
export const todos = {
  showTodos: createAction(SHOW_TODOS),
  newTodos: createAction(NEW_TODOS),
  addTodo: createAction(ADD_TODO),
  addTodoSuccess: createAction(ADD_TODO_SUCCESS),
  addTodoFailure: createAction(ADD_TODO_FAILURE)
}

// Sagas
function* todosSaga () {
  yield [
    fork(addTodoSaga)
  ]
}

function* addTodoSaga () {
  while (true) {
    const {payload} = yield take(ADD_TODO)
    const {body, error} = yield api.todos.add(payload)

    if (body) {
      yield put(todos.addTodoSuccess(body))
    } else {
      yield put(todos.addTodoFailure(error))
    }
  }
}

// Initial state is always defined
export const initialState = fromJS({
  todoResults: {
    count: 0,
    start: 0,
    page: 0,
    items: []
  },
  isFetching: false,
  isVisible: false,
  filterBy: null,
  error: null
})

// Reducer function comes last, exported as default; it
// will be combined at the App level with combineReducers
export default function todosReducer(state = initialState, {type, payload}) {
  switch (type) {
    case SHOW_TODOS:
      return state.set('isVisible', true)

    case NEW_TODOS:
      return state
        .set('isVisible', true)
        .set('filterBy', 'new')

    case ADD_TODO:
      return state.set('isFetching', true)

    case ADD_TODO_SUCCESS:
      return state.updateIn(['todoResults', 'items'], items =>
          items.push(payload.items)
        )
        .setIn(['todoResults', 'count'], payload.count)
        .set('isFetching', false)

    case ADD_TODO_FAILURE:
      return state
        .set('error', payload)
        .set('isFetching', false)

    default:
      return state
  }
}

Each Module, on load, forks the main module saga and combines the reducer to Redux's store. The rest of the Module takes care of itself.

Of course, the amazing benefit of using Redux's middlewares still applies here, so any shared store logic that should extend to more than a single Module naturally goes into the App's middlewares. (Think time-travel debugging, state persistence, authentication services, route pushing, and more.)

Areas for Improvement

While this setup has proven easy to grasp and extend upon within our large application it does have some areas for improvement.

Specifically, we are currently passing every prop needed down from the Container level. These props are either defined in the Container or injected via the connect HoC. While this makes organization simple it also means every change or addition of a prop equates to changes to every JSX component in the tree. This gets tiring.

There's also little agreement about what constitutes "common" or "global" components that can be shared across the application. Where should these go?

One must be studious with imports in this kind of setup. If an action (or action creator) is required in another Module and the entire module/index.js is imported for this purpose one might find they have inadvertently imported most of that entire Module.

Lastly, this sort of setup does tend to produce a lot of duplicated bootstrapping code for each Module. While I'm sure this could be alleviated with a little ingenuity and forethought, doing so might also suffer from the same kind of abstractions that we've been striving to avoid.

Conclusion

I hope you have been inspired to use Redux in a slightly different and specific way, knowing that the benefits of organization really do translate into gained momentum. Have fun, push the boundaries, and as always, learn something new.

Happy Coding!

10 comments

If you were using redux-define you could reduce the boilerplate of defining action types.

Especially when using status suffixes like ERROR or SUCCESS, it can make a big difference.

Turning your example

export const ADD_TODO = 'todos/ADD_TODO'
export const ADD_TODO_SUCCESS = 'todos/ADD_TODO_SUCCESS'
export const ADD_TODO_FAILURE = 'todos/ADD_TODO_FAILURE'

Into

export const ADD_TODO = defineAction('ADD_TODO', [SUCCESS, FAILURE], 'todos');
Yes, it's mine. I should have mentioned that. I use it with redux-actions and redux-saga, like the example in the readme, and really like the now-obvious separation between actions and status updates, while having less boilerplate.


Very nice post @michaelgilley. I'm implementing an approach similar to the one you proposed for my PoC.

This can all be done much more simply and declaratively than with Redux-Saga. You should look at Cerebral's signals and its excellent function tree; it can be used not only in Cerebral, but also in Redux or MobX.

Thanks for the tip @hipertracker. I haven't looked at cerebral yet. I'll have to read more about it and see how it compares.

For importing modules across features I like Jack Hsu's implementation (http://jaysoo.ca/2016/02/28/organizing-redux-application/).

Yes, that scheme is very close to Ducks as well and there are several different ways to stash the modular components of a Redux app. Perhaps the most important is uniformity in design across modules. Also, while I do like the separation of each type into separate files this would also have the disadvantage of even more boilerplate code/structure to manage per module. Thanks for the post!

Thanks for the great post Michael. What if two different modules have common components? How would you organize that?

Thank you @cdharrison. If you have common components (for example forms or a common header) you'd move those out to their own module. We have a few of those global modules like FormElements that don't need a module dir/file but live at the root level anyway and are imported where needed. In these cases the root App module does not import anything from these modules. Great question!
