How Pinterest Leverages Realtime User Actions in Recommendation to Enhance Homefeed Engagement Volume | by Pinterest Engineering | Pinterest Engineering Blog

Xue Xia, Software Engineer, Homefeed Ranking; Neng Gu, Software Engineer, Content & User Understanding; Dhruvil Deven Badani, Engineering Manager, Homefeed Ranking; Andrew Zhai, Software Engineer, Advanced Technologies Group

Image from https://wallpapercave.com/neural-networks-wallpapers#google_vignette — black background with turquoise grid points

In this blog post, we will demonstrate how we improved Pinterest Homefeed engagement volume from a machine learning model design perspective: by leveraging realtime user action features in the Homefeed recommender system.

The homepage of Pinterest is one of the most important surfaces for pinners to discover inspirational ideas, and it contributes a large fraction of overall user engagement. The pins shown in the top positions on the Homefeed need to be personalized to create an engaging pinner experience. We retrieve a small fraction of the large volume of pins created on Pinterest as Homefeed candidate pins, according to user interest, followed boards, etc. To present the most relevant content to pinners, we then use a Homefeed ranking model (aka the Pinnability model) to rank the retrieved candidates by accurately predicting their personalized relevance to given users. Therefore, the Homefeed ranking model plays an important role in improving the pinner experience. Pinnability is a state-of-the-art neural network model that consumes pin signals, user signals, context signals, etc., and predicts the user's action given a pin. The high-level flow is shown in Figure 1.

Figure 1: Flow of candidate pins through the Pinnability model, where they are ordered by relevance and then shown on the Homefeed

The Pinnability model has been using pretrained user embeddings to model users' interests and preferences. For example, we use PinnerFormer (PinnerSAGE V3), a static, offline-learned user representation that captures a user's long-term interest by leveraging their past interaction history on Pinterest.

However, there are still some aspects that pretrained embeddings like PinnerSAGE don't cover, and we can fill in the gap by using a realtime user action sequence feature:

  • Model pinners' short-term interest: PinnerSAGE is trained using thousands of user actions over a long time horizon, so it mostly captures long-term interest. In contrast, the realtime user action sequence models short-term user interest and is complementary to the PinnerSAGE embedding.
  • More responsive: Unlike static features, realtime signals are able to respond faster. This is beneficial, especially for new, casual, and resurrected users who do not have much past engagement.
  • End-to-end optimization for the recommendation model objective: We use the user action sequence feature as a direct input to the recommendation model and optimize directly for the model objectives. Unlike with PinnerSAGE, we can attend the candidate pin features to each individual action in the sequence, for more flexibility.

In order to give pinners realtime feedback on their recent actions and improve the user experience on Homefeed, we propose to incorporate the realtime user action sequence signal into the recommendation model.

A stable, low-latency, realtime feature pipeline supports a robust online recommendation system. We serve the latest 100 user actions as a sequence, populated with pin embeddings and other metadata. The overall architecture can be segmented into event time and request time, as shown in Figure 2.
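As a rough sketch of what serving the "latest 100 user actions" means, the following is an illustration only, not Pinterest's actual Rockstore/Flink implementation; the class and field names are hypothetical. A fixed-length buffer evicts the oldest events as new ones arrive:

```python
from collections import deque

MAX_ACTIONS = 100  # the pipeline serves the latest 100 user actions


class UserActionSequence:
    """Hypothetical per-user store keeping only the latest MAX_ACTIONS events."""

    def __init__(self):
        # deque with maxlen silently evicts the oldest event when full
        self._events = deque(maxlen=MAX_ACTIONS)

    def append(self, pin_id, action_type, ts):
        """Record one engagement event (pin, action type, timestamp)."""
        self._events.append((pin_id, action_type, ts))

    def latest(self):
        """Return the sequence newest-first, as a ranking model would read it."""
        return list(reversed(self._events))
```

In a real pipeline the store would be a remote key-value system fed from an event log rather than an in-process deque, but the capping-at-100 semantics are the same.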

Figure 2: At event time, Rockstore stores data from the Kafka log via the NRT/Flink app materializer. At request time, Homefeed logging/serving requests go through Unity HF, the USSv2 aggregator, and the USSv2 view; the stored Rockstore data is then transformed into a merged UFR.

To minimize application downtime and signal failure, efforts were made on two fronts:

ML side

  • Feature/schema driven validation
  • Delayed delivery event handling to prevent data leakage
  • Itemized action monitoring over time against data drift

Ops side

  • Stats monitoring on core job health, latency/throughput, etc.
  • Comprehensive on-call for minimal application downtime
  • Event recovery strategy

We generated the following features for the Homefeed recommender model:

  • pinEngagementActionTypeSequence: the user's past 100 engagement actions (e.g., repin, click, hide)
  • pinEngagementEmbeddingSequence: the PinSage embeddings of the user's past 100 engaged pins
  • pinEngagementTimestampSequence: the timestamps of the user's past 100 engagements

Figure 3 is an overview of our Homefeed ranking model. The model consumes a <user, pin> pair and predicts the action the user will take on the candidate pin. The input to the Pinnability model includes signals of various types, including pinner signals, user signals, pin signals, and context signals. We now add a new realtime user sequence signal input and use a sequence processing module to process the sequence features. With all the features transformed, we feed them into an MLP layer with multiple action heads to predict the user's action on the candidate pin.

Figure 3: Pinterest Homefeed ranking (Pinnability) model architecture

Recent literature has used transformers for recommendation tasks. Some model the recommendation problem as a sequence prediction task, where the model's input is (S1, S2, ..., SL-1) and its expected output is a "shifted" version of the same sequence: (S2, S3, ..., SL). To keep the current Pinnability architecture, we only adopt the encoder part of these models.
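In that framing, the shift is just a one-position slice. A tiny illustration, with S1 through S4 as placeholder actions:

```python
# Recommendation-as-sequence-prediction framing from the literature:
# the model input is (S1, ..., S_{L-1}) and the training target is the
# same sequence shifted left by one position, (S2, ..., S_L).
actions = ["S1", "S2", "S3", "S4"]

model_input = actions[:-1]  # (S1, S2, S3)
target = actions[1:]        # (S2, S3, S4)
```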

To construct the transformer input, we utilized three important realtime user sequence features:

  1. Engaged pin embedding: pin embeddings (learned GraphSage embeddings) for the past 100 engaged pins in the user's history
  2. Action type: type of engagement in the user action sequence (e.g., repin, click, hide)
  3. Timestamp: timestamp of each engagement in the user's history

We also use the candidate pin embedding to perform early fusion with the above realtime user sequence features.

Figure 4: Initial architecture of the user sequence transformer module (v1.0)

As illustrated in Figure 4, to construct the input of the sequence transformer module, we stack [candidate_pin_emb, action_emb, engaged_pin_emb] into a matrix. The early fusion of the candidate pin and the user sequence proved to be very important according to online and offline experiments. We also apply a random time window mask to entries in the sequence whose actions were taken within one day of request time. The random time window mask is used to make the model less responsive and avoid a diversity drop. We then feed the matrix into a transformer encoder. For the initial experiment, we only used one transformer encoder layer. The output of the transformer encoder is a matrix of shape [seq_len, hidden_dim]. We then flatten the output to a vector and feed it, along with all other features, into MLP layers to predict multi-head user actions.
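A minimal sketch of this early fusion and random time window mask, using toy dimensions and numpy in place of the real model code (the function and argument names are hypothetical, not Pinterest's):

```python
import numpy as np

# Illustrative sizes only; not Pinterest's actual dimensions.
SEQ_LEN, EMB_DIM = 100, 16
rng = np.random.default_rng(0)


def build_sequence_input(candidate_pin_emb, action_emb_seq, engaged_pin_emb_seq,
                         action_age_secs, mask_prob=0.5):
    """Stack [candidate_pin_emb, action_emb, engaged_pin_emb] per sequence
    position (early fusion), then randomly zero out entries whose action
    happened within one day of request time (random time window mask)."""
    # Broadcast the single candidate embedding across all sequence positions.
    cand = np.tile(candidate_pin_emb, (SEQ_LEN, 1))        # [seq_len, emb_dim]
    x = np.concatenate([cand, action_emb_seq, engaged_pin_emb_seq], axis=1)

    # Mask some actions taken less than one day before the request.
    recent = action_age_secs < 24 * 3600
    drop = recent & (rng.random(SEQ_LEN) < mask_prob)
    x[drop] = 0.0                                          # masked entries
    return x                                               # [seq_len, 3 * emb_dim]
```

The matrix returned here is what would be fed into the transformer encoder layer.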

In our second iteration of the user sequence module (v1.1), we did some tuning on top of the v1.0 architecture. We increased the number of transformer encoder layers and compressed the transformer output. Instead of flattening the full output matrix, we only took the first 10 output tokens, concatenated them with a max-pooling token, and flattened the result into a vector of length (10 + 1) * hidden_dim. The first 10 output tokens capture the user's most recent interests, and the max-pooling token can represent the user's longer-term preference. Because the output dimension became much smaller, it was affordable to apply an explicit feature-crossing layer with the DCN v2 architecture on the full feature set, as previously illustrated in Figure 3.
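The output compression can be sketched as follows (illustrative numpy; `compress_encoder_output` and the toy dimensions are assumptions, not Pinterest's code):

```python
import numpy as np


def compress_encoder_output(enc_out, n_recent=10):
    """v1.1-style compression: keep the first n_recent tokens (most recent
    interests) plus one max-pooling token (longer-term preference), and
    flatten to a vector of length (n_recent + 1) * hidden_dim."""
    recent = enc_out[:n_recent]                    # [n_recent, hidden_dim]
    max_pool = enc_out.max(axis=0, keepdims=True)  # [1, hidden_dim]
    return np.concatenate([recent, max_pool], axis=0).reshape(-1)
```

For a [100, hidden_dim] encoder output, this replaces a 100 * hidden_dim flattened vector with an 11 * hidden_dim one, which is what makes the downstream DCN v2 feature crossing affordable.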

Figure 5: Improved architecture of the user sequence transformer module (v1.1)

Challenge 1: Engagement Rate Decay

Through online experiments, we observed that user engagement metrics gradually decayed in the group with the realtime action sequence treatment. Figure 6 demonstrates that, for the same model architecture, if we don't retrain the model, the engagement gain is much smaller than if we retrain it on fresh data.

Figure 6: Homefeed repin volume increase over time. The blue line represents the retrained model; the red line represents the fixed model.

Our hypothesis is that our model with realtime features is quite time sensitive and requires frequent retraining. To verify this hypothesis, we retrained both the control model (without the realtime user action feature) and the treatment model (with the realtime user action feature) at the same time, and we compared the effect of retraining on both. As shown in Figure 7, we found that retraining benefits the treatment model much more than the control model.

Figure 7: Overall repin gain of the sequence model retrain and the control model retrain across day 0 to day 11

Therefore, to tackle the engagement decay issue, we retrain the realtime sequence model twice per week. With this in place, the engagement rate has become much more stable.

Challenge 2: Serving a Large Model at Organic Scale

With the transformer module introduced into the recommender model, its complexity increased significantly. Before this work, Pinterest had been serving the Homefeed ranking model on CPU clusters. Our model increases CPU serving latency by more than 20x. We therefore migrated the ranking model to GPU serving and are able to hold latency neutral at the same cost.

On Pinterest, one of the most important user actions is repin, or save. Repin is one of the key indicators of user engagement on the platform. Therefore, we approximate the user engagement level with repin volume and use repin volume to evaluate model performance.

Offline Evaluation

We performed offline evaluation on different models that process realtime user sequence features. Specifically, we tried the following architectures:

  • Average Pooling: the simplest architecture, where we use the average of the pin embeddings in the user sequence to represent the user's short-term interest
  • Convolutional Neural Network (CNN): uses a CNN to encode the sequence of pin embeddings. CNNs are well suited to capturing dependencies in local information.
  • Recurrent Neural Network (RNN): uses an RNN to encode the sequence of pin embeddings. Compared to a CNN, an RNN better captures longer-term dependencies.
  • Long Short-Term Memory (LSTM): uses LSTM, a more sophisticated version of an RNN that captures longer-term dependencies even better by using memory cells and gating.
  • Vanilla Transformer: encodes only the pin embedding sequence directly using the transformer module.
  • Improved Transformer v1.0: the improved transformer architecture as illustrated in Figure 4.
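For concreteness, the average pooling baseline amounts to a single mean over the sequence. An illustrative sketch (the function name is ours, not from the paper):

```python
import numpy as np


def avg_pool_user_embedding(engaged_pin_embs):
    """Average-pooling baseline: represent the user's short-term interest
    as the mean of the engaged-pin embeddings in the action sequence."""
    return np.asarray(engaged_pin_embs, dtype=float).mean(axis=0)
```

Every other architecture in the list above replaces this single mean with a learned sequence encoder over the same embeddings.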

For the Homefeed surface specifically, two of the most important metrics are HIT@3 for repin prediction and for hide prediction. For repin, we try to increase HIT@3; for hide, the goal is to decrease HIT@3.
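One common way to compute HIT@k per user, which we assume here for illustration (the post does not spell out the exact definition):

```python
def hit_at_k(ranked_pins, engaged_pins, k=3):
    """Return 1 if any pin the user actually engaged with (e.g. repinned,
    or hid) appears in the top k of the ranked list, else 0. Averaging this
    over users/requests offline gives HIT@k."""
    return int(any(pin in engaged_pins for pin in ranked_pins[:k]))
```

Under this reading, a higher repin HIT@3 means engaged pins are ranked nearer the top, while a lower hide HIT@3 means hidden pins are pushed down.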

Model                        hide HIT@3    repin HIT@3
Average Pooling              -1.61%        0.21%
CNN                          -1.29%        0.08%
RNN                          -2.46%        -1.05%
LSTM                         -2.98%        -0.75%
Vanilla Transformer          -8.45%        1.56%
Improved Transformer v1.0    -13.49%       8.87%

The offline results show that even with the vanilla transformer and only pin embeddings, performance is already better than the other architectures. The improved transformer architecture showed very strong offline results: +8.87% offline repin and a -13.49% hide drop. The gain of Improved Transformer v1.0 over the vanilla transformer came from several aspects:

  1. Using action embeddings: this helps the model distinguish positive and negative engagement
  2. Early fusion of the candidate pin and the user sequence: this contributes the majority of the engagement gain, according to online and offline experiments
  3. Random time window mask: helps with diversity

Online Evaluation

We then conducted an online A/B experiment on 1.5% of total traffic with the improved transformer model v1.0. During the online experiment, we observed that repin volume across all users increased by 6%. We define the set of new, casual, and resurrected users as non-core users, and we observed that the repin volume gain on non-core users reached 11%. In line with the offline evaluation, hide volume decreased by 10%.

Recently, we tried transformer model v1.1 as illustrated in Figure 5, and we achieved an additional 5% repin gain on top of the v1.0 model. Hide volume remained neutral relative to v1.0.

Model Variation                           Cumulative Homefeed Repin Volume (all / non-core users)    Cumulative Homefeed Hide Volume (all users)
Sequence Model v1.0                       6% / 10%                                                   -10%
Sequence Model v1.1 + Feature Crossing    11% / 17%                                                  -10%

Production Metrics (Full Traffic)

We want to call out an interesting observation: the online experiment underestimates the power of the realtime user action sequence. We observed higher gains when we rolled out the model as the production Homefeed ranking model to full traffic. This is because of the learning effect of a positive feedback loop:

  1. As users see a more responsive Homefeed, they tend to engage with more relevant content, and their behavior changes (for example, more clicks or repins)
  2. With this behavior change, the realtime user sequence that logs their behavior also shifts. For example, there are more repin actions in the sequence. We then generate the training data with this shifted user sequence feature.
  3. As we retrain the Homefeed ranking model with this shifted dataset, there is a positive compounding effect that makes the retrained model more powerful, yielding a higher engagement rate. This then loops back to 1.
Feedback loop of the realtime sequence model: 1) user behavior changes as users see more responsive recommendations; 2) the training data changes, since the user action sequence feature itself changes (more repin actions in the training data); 3) the ranking model improves, as it is retrained on the latest dataset and predicts user actions more accurately, yielding higher engagement; then back to 1.

The actual Homefeed repin volume increase that we observed after shipping this model to production is larger than the online experiment results. However, we cannot disclose the exact number in this blog.

Our work using realtime user action signals in Pinterest's Homefeed recommender system has greatly improved Homefeed relevance. The transformer architecture works best among the traditional sequence modeling approaches we tried. There were various challenges along the way that were non-trivial to tackle. We discovered that retraining the model with the realtime sequence is critical to sustaining user engagement, and that GPU serving is indispensable for large-scale, complex models.

It's exciting to see the significant gains from this work, but what's more exciting is that we know there's still much more room to improve. To continue improving the pinner experience, we'll work on the following aspects:

  1. Feature Improvement: We plan to develop a more fine-grained realtime sequence signal that includes more action types and action metadata.
  2. GPU Serving Optimization: This is the first use case of serving large models on GPU clusters at organic scale. We plan to improve GPU serving usability and performance.
  3. Model Iteration: We will continue iterating on the model so that we fully utilize the realtime signal.
  4. Adoption on Other Surfaces: We will try similar ideas on other surfaces: Related Pins, notifications, search, etc.

This work is a result of collaboration across multiple teams at Pinterest. Many thanks to the following people who contributed to this project:

  • GPU serving optimization: Po-Wei Wang, Pong Eksombatchai, Nazanin Farahpour, Zhiyuan Zhang, Saurabh Joshi, Li Tang
  • Technical support on ML: Nikil Pancha
  • Signal generation and serving: Yitong Zhou
  • Fast controllability distribution convergence: Ludek Cigler

To learn more about engineering at Pinterest, check out the rest of our Engineering Blog and visit our Pinterest Labs site. To explore life at Pinterest, visit our Careers page.