From: Johannes Schulte
Date: Tue, 14 Nov 2017 21:02:11 +0100
Subject: Re: "LLR with time"
To: user@mahout.apache.org

✓

On Mon, Nov 13, 2017 at 3:32 AM, Ted Dunning wrote:

> Regarding overfitting, don't forget dithering. That can be the most important single step you take in building a good recommender.
>
> Dithering can be inversely proportional to the number of exposures so far if you like to give novel items more exposure.
>
> This doesn't have to be very fancy. I have had very good results by generating a long list of recommendations, computing a pseudo score based on rank, adding a bit of noise, and resorting. I also scanned down the list and penalized items that showed insufficient diversity. Then I resorted again. Typically, the pseudo score was something like exp(-r) where r is rank.
>
> The noise scale is adjusted to leave a good proportion of originally recommended items in the first page. It could easily have been scaled by 1/sqrt(exposures) to let the newbies move around more.
>
> The parameters here should be adjusted a bit based on experiments, but a heuristic first hack works pretty well as a start.
>
> On Sun, Nov 12, 2017 at 10:34 PM, Pat Ferrel wrote:
>
> > Part of what Ted is talking about can be seen in the carousels on Netflix or Amazon. Some are not recommendations, like "trending" videos, or "new" videos, or "prime" videos (substitute your own promotions here). Nothing to do with recommender-created items, but presented along with recommender-based carousels.
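Ted's dithering recipe above (a pseudo score like exp(-r) from rank, plus noise, then a resort) can be sketched roughly as below. The decay constant, default noise scale, and the 1/sqrt(exposures) scaling for lightly exposed items are illustrative choices following his remarks, not code from any particular system:

```python
import math
import random

def dither(ranked_items, exposures, noise_scale=0.3, seed=None):
    """Re-rank a recommendation list: score = exp(-decay * rank) + noise.

    `exposures` maps item -> how often it has been shown so far; items
    with few exposures get larger noise, so newcomers move around more.
    """
    rng = random.Random(seed)
    scored = []
    for rank, item in enumerate(ranked_items, start=1):
        pseudo = math.exp(-0.1 * rank)  # decay constant is a tunable guess
        # scale noise down for heavily exposed items (1/sqrt(exposures))
        scale = noise_scale / math.sqrt(max(exposures.get(item, 1), 1))
        scored.append((pseudo + rng.gauss(0.0, scale), item))
    scored.sort(reverse=True)
    return [item for _, item in scored]
```

With noise_scale tuned so that most first-page items survive, repeated calls give slightly different orderings, which is the point: it spreads exposure and generates less biased training data.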
> > They are based on analytics or business rules and ideally have some randomness built in. The reason for this is 1) it works by exposing users to items that they would not see in recommendations, and 2) it provides data to build the recommender model from.
> >
> > A recommender cannot work in an app that has no non-recommended items displayed, or there will be no unbiased data to create recommendations from. This would lead to crippling overfitting. Most apps have placements like the ones mentioned above and also have search and browse. However you do it, it must be prominent and always available. The moral of this paragraph is: don't try to make everything a recommendation, it will be self-defeating. In fact, make sure not every video watch comes from a recommendation.
> >
> > Likewise, think of placements (reflecting a particular recommender use) as experimentation grounds. Try things like finding a recommended category and then recommending items in that category, all based on user behavior. Or try a placement based on a single thing a user watched, like "because you watched xyz you might like these". Don't just show the most popular categories for the user and recommend items in them. That would be a type of overfitting too.
> >
> > I'm sure we have strayed far from your original question, but maybe it's covered somewhere in here.
> >
> > On Nov 12, 2017, at 12:11 PM, Johannes Schulte <johannes.schulte@gmail.com> wrote:
> >
> > I did "second order" recommendations before, but more to fight sparsity and find more significant associations in situations with less traffic, so recommending categories instead of products. There needs to be some third-order sorting / boosting like you mentioned with "new music", or maybe popularity or hotness, to avoid quasi-random order.
> > For events with limited lifetime it's probably some mixture of spatial distance and freshness.
> >
> > We will definitely keep an eye on the generation process of data for new items. It depends on the domain, but in the time of multi-channel promotion of videos, shows, and products, it also helps that there is traffic driven from external sources.
> >
> > Thanks for the detailed hints - now it's time to see what comes out of this.
> >
> > Johannes
> >
> > On Sun, Nov 12, 2017 at 7:52 AM, Ted Dunning wrote:
> >
> > > Events have the natural good quality that having a cold start means you will naturally favor recent interactions, simply because there won't be any old interactions to deal with.
> > >
> > > Unfortunately, that also means that you will likely be facing serious cold start issues all the time. I have used two strategies to deal with cold starts, both fairly successfully.
> > >
> > > *Method 1: Second order recommendation*
> > >
> > > For novel items with no history, you typically do have some kind of information about the content. For an event, you may know the performer, the organizer, the venue, possibly something about the content of the event as well (especially for a tour event). As such, you can build a recommender that recommends this secondary information, and then do a search with the recommended secondary information to find events. This actually works pretty well, at least for the domains where I have used it (music and videos). For instance, in music, you can easily recommend a new album based on the artist(s) and track list.
> > >
> > > The trick here is to determine when and how to blend in normal recommendations. One way is query blending, where you combine the second order query with a normal recommendation query, but I think that a fair bit of experimentation is warranted here.
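Ted's Method 1 can be sketched as follows. The data shapes here (attribute lists per event, and a scored attribute-to-attribute map standing in for the output of a cooccurrence recommender trained on attributes) are assumptions made for illustration only:

```python
from collections import defaultdict

def second_order_recs(user_history, event_attrs, attr_model, top_n=10):
    """Recommend cold-start events via their attributes (artist, venue...).

    attr_model maps an attribute the user touched to scored related
    attributes; event_attrs maps each event to its attribute list.
    """
    # 1) "recommend" secondary information from the user's history
    attr_scores = defaultdict(float)
    for item in user_history:
        for attr in event_attrs.get(item, ()):
            for related, score in attr_model.get(attr, ()):
                attr_scores[related] += score
    # 2) "search" events by summing the scores of their attributes
    scored = []
    for event, attrs in event_attrs.items():
        if event in user_history:
            continue
        s = sum(attr_scores.get(a, 0.0) for a in attrs)
        if s > 0:
            scored.append((s, event))
    scored.sort(reverse=True)
    return [e for _, e in scored[:top_n]]
```

In practice step 2 would be a query against a search index (as Ted describes), not an in-memory scan; the sum-of-attribute-scores scoring is a stand-in for the search engine's relevance score.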
> > > *Method 2: What's new and what's trending*
> > >
> > > It is always important to provide alternative avenues of information gathering for recommendation. Especially for the user-generated video case, there was pretty high interest in the "What's new" and "What's hot" pages. If you do a decent job of dithering here, you keep reasonably good content on the what's new page longer than content that doesn't pull. That maintains interest in the page. Similarly, you can have a bit of a lower bar for new content to be classified as hot than for established content. That way you keep the page fresh (because new stuff appears transiently), but you also have a fair bit of really good stuff as well. If done well, these pages will provide enough interactions with new items so that they don't start entirely cold. You may need genre-specific or location-specific versions of these pages to avoid interesting content being overwhelmed. You might also be able to spot content that has intense interest from a sub-population, as opposed to diffuse interest from a mass population.
> > >
> > > You can also use novelty and trending boosts for content in the normal recommendation engine. I have avoided this in the past because I felt it was better to have specialized pages for what's new and hot, rather than because I had data saying it was bad to do. I have put a very weak recommendation effect on the what's hot pages, so that people tend to see trending material that they like. That doesn't help on what's new pages, for obvious reasons, unless you use a touch of second order recommendation.
> > > On Sat, Nov 11, 2017 at 11:00 PM, Johannes Schulte <johannes.schulte@gmail.com> wrote:
> > >
> > > > Well, the greece thing was just an example for a thing you don't know upfront - it could be any of the modeled features on the cross recommender input side (user segment, country, city, previous buys), some subpopulation getting active. So the current approach, probably with sampling that favours newer events, will be the best here. Luckily, a sampling strategy is a big topic anyway, since we're trying to go for the near-real-time way - pat, you talked about it some while ago on this list, and i still have to look at the flink talk from trevor grant, but I'm really eager to attack this after years of batch :)
> > > >
> > > > Thanks for your thoughts, I am happy I can rule something out given the domain (poisson llr). Luckily, the domain I'm working on is event recommendations, so there is a natural deterministic item expiry (as compared to christmas-like stuff).
> > > >
> > > > Again, thanks!
> > > >
> > > > On Sat, Nov 11, 2017 at 7:00 PM, Ted Dunning wrote:
> > > >
> > > > > Inline.
> > > > >
> > > > > On Sat, Nov 11, 2017 at 6:31 PM, Pat Ferrel wrote:
> > > > >
> > > > > > If Mahout were to use http://bit.ly/poisson-llr it would tend to favor new events in calculating the LLR score for later use in the threshold for whether a co- or cross-occurrence is incorporated in the model.
> > > > >
> > > > > I don't think that this would actually help for most recommendation purposes.
> > > > >
> > > > > It might help to determine that some item or other has broken out of historical rates. Thus, we might have "hotness" as a detected feature that could be used as a boost at recommendation time.
> > > > > We might also have "not hotness" as a negative boost feature.
> > > > >
> > > > > Since we have a pretty good handle on the "other" counts, I don't think that the Poisson test would help much with the cooccurrence stuff itself.
> > > > >
> > > > > Changing the sampling rule could make a difference to temporality and would be more like what Johannes is asking about.
> > > > >
> > > > > > But it doesn't relate to popularity as I think Ted is saying.
> > > > > >
> > > > > > Are you looking for 1) personal recommendations biased by hotness in Greece, or 2) things hot in Greece?
> > > > > >
> > > > > > 1) Create a secondary indicator for "watched in some locale". The locale-id uses a country-code + postal-code maybe, but not lat-lon - something that includes a good number of people/events. Then the query would be user-id and user-locale. This would yield personal recs preferred in the user's locale - Athens-west-side in this case.
> > > > >
> > > > > And this works in the current regime. Simply add location tags to the user histories and do cooccurrence against content. Locations will pop out as indicators for some content and not for others. Then when somebody appears in some location, their tags will retrieve localized content.
> > > > >
> > > > > For localization based on strict geography, say for restaurant search, we can just add business rules based on geo-search. A very large bank customer of ours does that, for instance.
> > > > >
> > > > > > 2) Split the data into locales and do the hot calc I mention. The query would have no user-id, since it is not personalized, but would yield "hot in Greece".
> > > > >
> > > > > I think that this is a good approach.
> > > > > > Ted's "Christmas video" tag is what I was calling a business rule and can be added to either of the above techniques.
> > > > >
> > > > > But the (not) hotness feature might help with automating this.
> > > > >
> > > > > > On Nov 11, 2017, at 4:01 AM, Ted Dunning wrote:
> > > > > >
> > > > > > So ... there are a few different threads here.
> > > > > >
> > > > > > 1) LLR, but with time. Quite possible, but not really what Johannes is talking about, I think. See http://bit.ly/poisson-llr for a quick discussion.
> > > > > >
> > > > > > 2) Time-varying recommendation. As Johannes notes, this can make use of windowed counts. The problem is that rarely accessed items should probably have longer windows, so that we use longer-term trends when we have less data.
> > > > > >
> > > > > > The good news here is that some part of this is nearly already in the code. The trick is that the down-sampling used in the system can be adapted to favor recent events over older ones. That means that if the meaning of something changes over time, the system will catch on. Likewise, if something appears out of nowhere, it will quickly train up. This handles the popular-in-Greece-right-now problem.
> > > > > >
> > > > > > But this isn't the whole story of changing recommendations. Another problem that we commonly face is what I call the christmas music issue. The idea is that there are lots of recommendations for music that are highly seasonal. Thus, Bing Crosby fans want to hear White Christmas until the day after christmas, at which point this becomes a really bad recommendation.
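Ted's point above, that the down-sampling in the pipeline can be adapted to favor recent events over older ones, might look roughly like this. The exponential keep-probability, the half-life parameterization, and the per-user cap are assumptions for illustration, not Mahout's actual sampler:

```python
import random

def recency_downsample(events, now, half_life, max_per_user=500, rng=None):
    """Keep each (user, item, timestamp) event with probability
    0.5 ** (age / half_life), then cap per-user history length.

    A stand-in for recency-biased down-sampling ahead of cooccurrence
    analysis: old interactions survive rarely, recent ones almost always.
    """
    rng = rng or random.Random(0)
    kept = {}
    for user, item, ts in events:
        keep_p = 0.5 ** ((now - ts) / half_life)
        if rng.random() < keep_p:
            kept.setdefault(user, []).append((item, ts))
    # enforce the usual per-user interaction cut, newest first
    for hist in kept.values():
        hist.sort(key=lambda it: -it[1])
        del hist[max_per_user:]
    return kept
```

The effect Ted describes falls out naturally: if an item's meaning shifts, the surviving sample is dominated by recent behavior, so the model catches on quickly.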
> > > > > > To some degree, this can be partially dealt with by using temporal tags as indicators, but that doesn't really allow a recommendation to be completely shut down.
> > > > > >
> > > > > > The only way that I have seen to deal with this in the past is with a manually designed kill switch. As much as possible, we would tag the obviously seasonal content and then add a filter to kill or downgrade that content the moment it went out of fashion.
> > > > > >
> > > > > > On Sat, Nov 11, 2017 at 9:43 AM, Johannes Schulte <johannes.schulte@gmail.com> wrote:
> > > > > >
> > > > > > > Pat, thanks for your help, especially the insights on how you handle the system in production and the tips for multiple acyclic buckets. Doing the combination of signals when querying sounds okay, but as you say, it's always hard to find the right boosts without setting up some LTR system. If there were a way to use the hotness when calculating the indicators for subpopulations, it would be great - especially for a cross recommender.
> > > > > > >
> > > > > > > e.g. people in greece _now_ are viewing this show/product/whatever
> > > > > > >
> > > > > > > And here the popularity of the recommended item in this subpopulation could be overlooked when just looking at the overall derivatives of activity.
> > > > > > >
> > > > > > > Maybe one could do multiple G-Tests using sliding windows:
> > > > > > > * itemA&itemB vs population (classic)
> > > > > > > * itemA&itemB(t) vs itemA&itemB(t-1)
> > > > > > > ...
> > > > > > >
> > > > > > > and derive multiple indicators per item to be indexed.
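The G-test Johannes proposes to run over sliding windows is the root LLR statistic used for cooccurrence analysis; a standard 2x2 version, following Ted Dunning's entropy formulation, looks like this:

```python
import math

def xlogx(x):
    """x * ln(x), with the conventional 0 * ln(0) = 0."""
    return 0.0 if x == 0 else x * math.log(x)

def entropy(*counts):
    """Unnormalized Shannon entropy of a list of counts (N * H)."""
    return xlogx(sum(counts)) - sum(xlogx(c) for c in counts)

def llr_2x2(k11, k12, k21, k22):
    """Log-likelihood ratio (G-test) for a 2x2 contingency table:
    k11 = both events, k12/k21 = one without the other, k22 = neither.
    """
    row_entropy = entropy(k11 + k12, k21 + k22)
    col_entropy = entropy(k11 + k21, k12 + k22)
    mat_entropy = entropy(k11, k12, k21, k22)
    return 2.0 * (row_entropy + col_entropy - mat_entropy)
```

For the windowed variants Johannes lists, the four cells would be filled from counts in window t versus window t-1 (or versus the rest of the population), computed per item pair, yielding one indicator per window scheme.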
> > > > > > > But this all relies on discretizing time into buckets and not looking at the distribution of time between events like in the presentation above - maybe there is something way smarter.
> > > > > > >
> > > > > > > Johannes
> > > > > > >
> > > > > > > On Sat, Nov 11, 2017 at 2:50 AM, Pat Ferrel wrote:
> > > > > > >
> > > > > > > > BTW, you should take time buckets that are relatively free of daily cycles, like 3-day, week, or month buckets, for "hot". This is to remove cyclical effects from the frequencies as much as possible, since you need 3 buckets to see the change in change, 2 for the change, and 1 for the event volume.
> > > > > > > >
> > > > > > > > On Nov 10, 2017, at 4:12 PM, Pat Ferrel wrote:
> > > > > > > >
> > > > > > > > So your idea is to find anomalies in event frequencies to detect "hot" items?
> > > > > > > >
> > > > > > > > Interesting, maybe Ted will chime in.
> > > > > > > >
> > > > > > > > What I do is take the frequency and its first and second derivatives as measures of popularity, increasing popularity, and increasingly increasing popularity. Put another way: popular, trending, and hot. This is simple to do by taking 1, 2, or 3 time buckets and looking at the number of events, the derivative (difference), and the second derivative. Ranking all items by these values gives various measures of popularity or its increase.
> > > > > > > >
> > > > > > > > If your use is in a recommender, you can add a ranking field to all items and query for "hot" by using the ranking you calculated.
> > > > > > > >
> > > > > > > > If you want to bias recommendations by hotness, query with user history and boost by your hot field.
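Pat's bucket recipe (event count, first difference, second difference over the three most recent buckets) can be sketched as:

```python
def hotness(bucket_counts):
    """Compute (popular, trending, hot) per item from 3 time buckets.

    bucket_counts: item -> [oldest, middle, newest] event counts over
    equal-width buckets (e.g. 3-day or weekly, to dodge daily cycles).
    Returns item -> (count, first difference, second difference),
    i.e. Pat's popular / trending / hot measures.
    """
    out = {}
    for item, (c0, c1, c2) in bucket_counts.items():
        first = c2 - c1                  # the change
        second = (c2 - c1) - (c1 - c0)   # the change in change
        out[item] = (c2, first, second)
    return out
```

Ranking all items on the first, second, or third component gives the popular, trending, and hot orderings respectively; any of them can be stored as a ranking field on the indexed items, as Pat suggests.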
> > > > > > > > I suspect the hot field will tend to overwhelm your user history in this case, as it would if you used anomalies, so you'd also have to normalize the hotness to some range closer to the one created by the user-history matching score. I haven't found a very good way to mix these in a model, so use hot as a method of backfill if you cannot return enough recommendations, or in places where you may want to show just hot items. There are several benefits to this method of using hot to rank all items, including the fact that you can apply business rules to them just as with normal recommendations - so you can ask for hot in "electronics" if you know categories, or hot "in-stock" items, or ...
> > > > > > > >
> > > > > > > > Still, anomaly detection does sound like an interesting approach.
> > > > > > > >
> > > > > > > > On Nov 10, 2017, at 3:13 PM, Johannes Schulte <johannes.schulte@gmail.com> wrote:
> > > > > > > >
> > > > > > > > Hi "all",
> > > > > > > >
> > > > > > > > I am wondering what would be the best way to incorporate event time information into the calculation of the G-Test.
> > > > > > > >
> > > > > > > > There is a claim here
> > > > > > > > https://de.slideshare.net/tdunning/finding-changes-in-real-data
> > > > > > > > saying "Time aware variant of G-Test is possible".
> > > > > > > >
> > > > > > > > I remember I experimented with exponentially decayed counts some years ago, and this involved changing the counts to doubles, but I suspect there is some smarter way. What I don't get is the relation to a data structure like T-Digest when working with a lot of counts / cells for every combination of items.
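The exponentially decayed counts Johannes mentions (which indeed turn integer counts into doubles) can be kept lazily, decaying the stored value only when the counter is touched. A minimal sketch, with half-life parameterization as one common choice:

```python
import math

class DecayedCount:
    """Event count with exponential half-life decay, updated lazily."""

    def __init__(self, half_life):
        self.rate = math.log(2.0) / half_life
        self.value = 0.0
        self.last = None

    def _decay(self, now):
        # bring the stored value forward to `now`
        if self.last is not None:
            self.value *= math.exp(-self.rate * (now - self.last))
        self.last = now

    def add(self, now, n=1.0):
        self._decay(now)
        self.value += n

    def get(self, now):
        self._decay(now)
        return self.value
```

Four such counters per item pair could fill the G-test cells with recency-weighted values, though as the thread notes, whether that is the "smarter way" the slides hint at remains an open question.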
> > > > > > > > Keeping a t-digest for every combination seems unfeasible.
> > > > > > > >
> > > > > > > > How would one incorporate event time into recommendations to detect "hotness" of certain relations? Glad if someone has an idea...
> > > > > > > >
> > > > > > > > Cheers,
> > > > > > > >
> > > > > > > > Johannes