This paper presents an extension to training end-to-end Context-Aware Transformer Transducer (CATT) models using a simple, yet efficient method of mining hard negative phrases from the latent space of the context encoder. During training, given a reference query, we mine a number of similar phrases using approximate nearest neighbour search. These sampled phrases are then used as negative examples in the context list alongside random and ground truth contextual entries. By including approximate nearest neighbour phrases (ANN-P) in the context list, we encourage the learned representation to disambiguate between similar, but not identical, biasing phrases. This improves biasing accuracy when there are several similar phrases in the biasing inventory. We carry out experiments in a large-scale data regime, obtaining up to 7% relative word error rate reductions on the contextual portion of the test data. We also extend and evaluate the CATT approach in streaming applications.
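The mining and context-list construction described above could be sketched roughly as follows. This is a toy illustration, not the paper's implementation: random vectors stand in for context-encoder embeddings, a brute-force cosine-similarity search stands in for the approximate nearest neighbour index, and all function names and parameters are assumptions.

```python
import numpy as np

def mine_hard_negatives(query_idx, embeddings, k):
    """Return indices of the k phrases most similar to the query (excluding
    the query itself) by cosine similarity -- a brute-force stand-in for
    an approximate nearest neighbour search over the latent space."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed[query_idx]
    sims[query_idx] = -np.inf  # never return the query as its own negative
    return np.argsort(sims)[::-1][:k].tolist()

def build_context_list(query_idx, embeddings, phrases, k_hard, k_random, rng):
    """Assemble a training context list: the ground-truth phrase plus
    ANN hard negatives plus uniformly random negatives."""
    hard = mine_hard_negatives(query_idx, embeddings, k_hard)
    pool = [i for i in range(len(phrases)) if i != query_idx and i not in hard]
    rand = rng.choice(pool, size=k_random, replace=False).tolist()
    return [phrases[query_idx]] + [phrases[i] for i in hard + rand]

# Toy usage: 10 phrases with random 8-dim "latent" embeddings.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(10, 8))
phrases = [f"phrase_{i}" for i in range(10)]
context_list = build_context_list(3, embeddings, phrases,
                                  k_hard=2, k_random=3, rng=rng)
```

In an actual large-scale setup, the brute-force search would be replaced by an ANN index rebuilt periodically as the context encoder's latent space shifts during training.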