
Identify Language and Suggest Replies in Your App with ML Kit

Natural language processing (NLP) tasks are rather complex by nature. Luckily for those who do not have much experience building and training their own machine learning models, ML Kit makes this step unnecessary by providing two new base APIs recently announced by Google: Smart Reply and Language Identification. Implementing this functionality in your Android and iOS apps is really easy!

Language Identification: Overview

ML Kit recognizes text in 110 different languages and typically needs only a few words to make an accurate determination. The model can return the string’s single most likely language, or confidence scores for several possible languages. You can find a full list of supported languages (and their BCP-47 codes) in the ML Kit documentation.
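Because the codes are BCP-47, they map directly onto platform locale APIs when you need something human-readable. A small JVM-side sketch, where the displayName helper is our own convenience and not part of ML Kit:

```java
import java.util.Locale;

public class LanguageName {
    // Turn a BCP-47 code (as returned by language identification)
    // into an English display name; handle the "und" sentinel first.
    public static String displayName(String bcp47) {
        if ("und".equals(bcp47)) {
            return "Unknown language";
        }
        return Locale.forLanguageTag(bcp47).getDisplayName(Locale.ENGLISH);
    }

    public static void main(String[] args) {
        System.out.println(displayName("es")); // Spanish
        System.out.println(displayName("fr")); // French
        System.out.println(displayName("und"));
    }
}
```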

Zhang et al. (2018) described a model for the classification of codemixed text (CMX) that forms the background of Google’s approach to language ID. Critically, the authors assume that text may be monolingual, intra-mixed (with one transition between languages in a sentence), or inter-mixed (with many transitions between languages). Furthermore, the possibilities are reduced to language pairs, as this is the most common setting for codemixed text in live human contexts. The authors employed a scalable feed-forward network with a globally constrained decoder; upon publication in November 2018, they reported support for 100 languages and 100 specific language pairs.

Codemixed datasets are notoriously hard to gather, given that they require word-by-word labels and multilingual speakers in the specific pair at hand. The authors developed data augmentation techniques to expand smaller tagged datasets into larger ones by exploiting the similar structure between examples of codemixed text. For instance, in a translation context, you might see the following two sentences:

“How do you say ‘jump’ in German?”

“How do you say ‘see Spot run’ in German?”

This suggests different kinds of substitutions that could improve the overall robustness of the dataset, for instance, substituting any English term or phrase into the quotes.
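To make the substitution idea concrete, here is a toy sketch of template-based augmentation. The {X} placeholder convention is ours, and real augmentation would also need per-token language labels, which this sketch omits:

```java
import java.util.ArrayList;
import java.util.List;

public class QuoteSubstitution {
    // Expand one codemixed template into many examples by swapping
    // the quoted span. The template marks the slot with {X}.
    public static List<String> augment(String template, List<String> substitutes) {
        List<String> out = new ArrayList<>();
        for (String phrase : substitutes) {
            out.add(template.replace("{X}", phrase));
        }
        return out;
    }

    public static void main(String[] args) {
        String template = "How do you say '{X}' in German?";
        List<String> phrases = List.of("jump", "see Spot run", "good morning");
        augment(template, phrases).forEach(System.out::println);
    }
}
```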

Lately, NLP has been undergoing an explosion similar to computer vision’s takeoff enabled by convolutional neural networks (CNNs). Interestingly enough, the step to codemixed classification is similar to the step from whole-image classification to semantic segmentation (classifying each pixel in an image as being part of a class-instantiating object).

Smart Reply: Overview

The Smart Reply model detects the language of the conversation and provides up to three suggested responses. The number of responses depends on how many meet a sufficient level of confidence based on the input. The model uses up to 10 of the most recent messages from a conversation history to generate reply suggestions. It can also suggest an emoji as a possible reply. Smart Reply suggestions are based on the full context of a conversation, not just one message. Also, it’s worth noting that ML Kit takes advantage of on-device machine learning. This allows replies to be generated quickly, with user data remaining strictly on-device rather than being passed to a server for processing.

However, there are some limitations that you need to be aware of as well. First, Smart Reply is “intended for casual conversations in consumer apps” and “suggestions might not be appropriate for other contexts or audiences,” though the model is designed to filter out sensitive topics if such a topic is detected. Second, English is the only language supported by Smart Reply at this time. The model is, however, capable of detecting whether a different language is being used; in that case, replies will not be suggested to the user.

Finally, the model might not return any predictions at all if it isn’t confident in the relevance of the suggested replies, so there is no guarantee that the user will receive Smart Reply suggestions at all times. This inconsistency makes the feature harder to test. Nevertheless, this machine learning model is being used across a variety of Google products, such as Gmail, Allo, and Wear OS (previously called Android Wear). The latter also allows Smart Reply to be used in third-party messaging apps.

Google’s foray into Smart Reply (Kannan et al. 2016) kicked off with the insight that roughly 25% of email replies consist of 20 tokens or fewer. Whereas predicting replies of arbitrary length would be intractable, 20 tokens is nice and finite. Not to mention that 25% of a system Gmail’s size is massive, giving Google a wealth of email reply pairs for model training.

Smart Reply goes beyond simply picking the most probable replies. It attempts to build extra utility for the user directly into the system by favoring diversity, appropriateness, and quality. For instance, “yes” and “no” are both frequent responses to, “Do you want to go to the park Saturday?” Even if “yes,” “yep,” and “uh-huh” are the most probable replies, it’s of lesser utility to offer three assents with no option to dissent, since dissent is also a common choice. Thus, the model is balanced in favor of user utility, preferring to offer both positive and negative reply options where either is appropriate. Similarly, it biases reply selection toward quality: nobody wants to send “ya thx” to their boss, or in a sensitive context, so overly informal responses are penalized. Appropriateness is handled analogously to avoid social risks. While this quality measure is used for Gmail’s Smart Reply, Smart Reply in more casual contexts may place less emphasis on quality (and relatively more on filtering offensiveness, for instance).

The authors describe a long short-term memory (LSTM) model, which is more computationally expensive than a typical feed-forward network. To be sure the model scales to a system of Google’s size, they employ a “trigger model”: a cheaper DNN that decides when and where the beefier LSTM should be used. This trigger-model technique has been used across a variety of Google products, including hotword detection on Google Home.
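The trigger-model idea is easy to sketch in isolation. Below, a cheap predicate gates a more expensive suggestion function; the class name and the toy trigger rule are illustrative inventions, not Google’s implementation:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import java.util.function.Predicate;

public class TriggeredPipeline {
    private final Predicate<String> trigger;                 // cheap: should we bother?
    private final Function<String, List<String>> expensive;  // costly reply model

    public TriggeredPipeline(Predicate<String> trigger,
                             Function<String, List<String>> expensive) {
        this.trigger = trigger;
        this.expensive = expensive;
    }

    // Only invoke the expensive model when the cheap trigger fires;
    // otherwise return no suggestions at all.
    public Optional<List<String>> suggest(String message) {
        if (!trigger.test(message)) {
            return Optional.empty();
        }
        return Optional.of(expensive.apply(message));
    }

    public static void main(String[] args) {
        // Toy trigger: short messages ending in "?" are likely answerable.
        TriggeredPipeline p = new TriggeredPipeline(
                m -> m.endsWith("?") && m.split("\\s+").length <= 20,
                m -> List.of("Yes", "No", "Maybe"));
        System.out.println(p.suggest("Want to go to the park Saturday?"));
        System.out.println(p.suggest("Here is the quarterly report."));
    }
}
```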

Now that you have some background on Smart Reply and Language Identification, here is how to get these systems set up on Android and iOS.

Language Identification on Android

  1. Set up Firebase in your project.

  2. Add ML Kit dependencies to your project.

    dependencies {
      // ...
      // Version numbers may differ; check the Firebase release notes.
      implementation 'com.google.firebase:firebase-ml-natural-language:18.1.1'
      implementation 'com.google.firebase:firebase-ml-natural-language-language-id-model:18.0.2'
    }
  3. In your app-level build.gradle file, disable compression of tflite files.

    android {
        // ...
        aaptOptions {
            noCompress "tflite"
        }
    }
  4. Pass the string to the identifyLanguage() method of an instance of FirebaseLanguageIdentification. This method returns a language code in BCP-47 format. If the model is unable to identify the language, it returns the code und (undetermined).

    val languageIdentifier = FirebaseNaturalLanguage.getInstance().languageIdentification
    languageIdentifier.identifyLanguage("¿Cómo estás?")
        .addOnSuccessListener { identifiedLanguage ->
            Log.i(TAG, "Identified language: $identifiedLanguage")
        }
        .addOnFailureListener { e ->
            Log.e(TAG, "Language identification error", e)
        }
  5. Set a confidence threshold (optional):

    FirebaseLanguageIdentificationOptions options =
            new FirebaseLanguageIdentificationOptions.Builder()
                    .setConfidenceThreshold(0.5f)
                    .build();
  6. If you need to get a prediction for the text’s most likely languages, pass the string in question to the identifyAllLanguages() method of an instance of FirebaseLanguageIdentification.

  7. You can also change the confidence threshold by passing a FirebaseLanguageIdentificationOptions object to getLanguageIdentification():

    FirebaseLanguageIdentification languageIdentifier = FirebaseNaturalLanguage
            .getInstance()
            .getLanguageIdentification(
                    new FirebaseLanguageIdentificationOptions.Builder()
                            .setConfidenceThreshold(0.5f)
                            .build());

    If the model cannot find any language that satisfies this threshold, it will output the code und (undetermined) as a result.
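The threshold behavior is easy to model: among the candidate languages, keep the best-scoring one at or above the threshold, otherwise fall back to und. A Firebase-free sketch, where the map of scores stands in for what identifyAllLanguages() would return:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LanguageThreshold {
    // Return the highest-confidence language whose score is at or above
    // the threshold, or the sentinel "und" when none qualifies.
    public static String identify(Map<String, Float> candidates, float threshold) {
        String best = "und";
        float bestScore = threshold;
        for (Map.Entry<String, Float> e : candidates.entrySet()) {
            if (e.getValue() >= bestScore) {
                best = e.getKey();
                bestScore = e.getValue();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Float> scores = new LinkedHashMap<>();
        scores.put("es", 0.92f);
        scores.put("pt", 0.05f);
        System.out.println(identify(scores, 0.5f));  // es
        System.out.println(identify(scores, 0.95f)); // und
    }
}
```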

Language Identification on iOS

  1. Set up Firebase in your project.

  2. Include the ML Kit libraries in your Podfile (note: this step will differ if you are not using CocoaPods):

    pod 'Firebase/Core'
    pod 'Firebase/MLNaturalLanguage'
    pod 'Firebase/MLNLLanguageID'
  3. In your app, import Firebase: import Firebase

  4. Call the identifyLanguage(for:) method on the instance returned by languageIdentification(). This method returns a language code in BCP-47 format. If the model is unable to identify the language, it returns the code und (undetermined).

    let languageId = NaturalLanguage.naturalLanguage().languageIdentification()
    languageId.identifyLanguage(for: text) { (languageCode, error) in
      if let error = error {
        print("Failed with error: \(error)")
        return
      }
      if let languageCode = languageCode, languageCode != "und" {
        print("Identified Language: \(languageCode)")
      } else {
        print("No language was identified")
      }
    }
  5. If you need to get a prediction for the text’s most likely languages, pass the string in question to the identifyPossibleLanguages() method.

  6. Set the confidence threshold (optional):

    let options = LanguageIdentificationOptions(confidenceThreshold: 0.5)
    let languageId = NaturalLanguage.naturalLanguage().languageIdentification(options: options)
  7. You can change the confidence threshold by passing a LanguageIdentificationOptions object to languageIdentification():

    let options = LanguageIdentificationOptions(confidenceThreshold: 0.4)
    let languageId = NaturalLanguage.naturalLanguage().languageIdentification(options: options)
    languageId.identifyPossibleLanguages(for: text) { (identifiedLanguages, error) in
      if let error = error {
        print("Failed with error: \(error)")
        return
      }
      guard let identifiedLanguages = identifiedLanguages else { return }
      print("Identified Languages:\n" +
        identifiedLanguages.map {
          String(format: "(%@, %.2f)", $0.languageCode, $0.confidence)
        }.joined(separator: "\n"))
    }

Smart Reply on Android

  1. Set up Firebase in your project.
  2. In your app-level build.gradle file, add these dependencies:
    dependencies {
      // ...
      // Version numbers may differ; check the Firebase release notes.
      implementation 'com.google.firebase:firebase-ml-natural-language:18.1.1'
      implementation 'com.google.firebase:firebase-ml-natural-language-smart-reply-model:18.0.0'
    }
  3. Disable compression of tflite files.
    android {
        // ...
        aaptOptions {
            noCompress "tflite"
        }
    }
  4. Create a List with the conversation history, consisting of FirebaseTextMessage objects with the earliest timestamp first:

    val conversation = ArrayList<FirebaseTextMessage>()
    // Message sent by the local user:
    conversation.add(FirebaseTextMessage.createForLocalUser(
            "heading out now", System.currentTimeMillis()))
    // Message sent by a remote user:
    conversation.add(FirebaseTextMessage.createForRemoteUser(
            "Are you coming back soon?", System.currentTimeMillis(), userId))

Whenever the user receives a message, add the message, its timestamp, and the sender’s user ID to the conversation history. The user ID can be any string that uniquely identifies the sender within the conversation.

Note: Ensure that the most recent message in the conversation log that is being passed to ML Kit is from a non-local user.

To generate smart replies to a message, get an instance of FirebaseSmartReply and pass the conversation history to its suggestReplies() method:

val smartReply = FirebaseNaturalLanguage.getInstance().smartReply
smartReply.suggestReplies(conversation)
        .addOnSuccessListener { result ->
            if (result.status == SmartReplySuggestionResult.STATUS_NOT_SUPPORTED_LANGUAGE) {
                // Handle the case of an unsupported language
            } else if (result.status == SmartReplySuggestionResult.STATUS_SUCCESS) {
                // Handle success
            }
        }
        .addOnFailureListener {
            // Handle exception
        }

If the operation succeeds, a SmartReplySuggestionResult object is passed to the success handler. This object contains a list of up to 3 suggested replies, which you can present to your user.

for (suggestion in result.suggestions) {
    val replyText = suggestion.text
}
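The two bookkeeping rules above (at most the 10 most recent messages, newest message from a remote user) can be enforced before calling the API at all. A Firebase-free sketch, with a hypothetical Message class standing in for FirebaseTextMessage:

```java
import java.util.ArrayList;
import java.util.List;

public class ConversationWindow {
    // Hypothetical stand-in for FirebaseTextMessage.
    public static class Message {
        final String text;
        final long timestampMillis;
        final boolean isLocalUser;

        public Message(String text, long timestampMillis, boolean isLocalUser) {
            this.text = text;
            this.timestampMillis = timestampMillis;
            this.isLocalUser = isLocalUser;
        }
    }

    // Keep only the 10 most recent messages (input assumed oldest-first).
    public static List<Message> window(List<Message> history) {
        int from = Math.max(0, history.size() - 10);
        return history.subList(from, history.size());
    }

    // Only request suggestions when the newest message is from a remote user.
    public static boolean shouldSuggest(List<Message> history) {
        return !history.isEmpty() && !history.get(history.size() - 1).isLocalUser;
    }

    public static void main(String[] args) {
        List<Message> history = new ArrayList<>();
        history.add(new Message("heading out now", 1000L, true));
        history.add(new Message("Are you coming back soon?", 2000L, false));
        System.out.println("window size: " + window(history).size());
        System.out.println("should suggest: " + shouldSuggest(history));
    }
}
```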

Smart Reply on iOS

  1. Set up a Firebase Account.

  2. Include the ML Kit libraries in your Podfile:

    pod 'Firebase/Core'
    pod 'Firebase/MLCommon'
    pod 'Firebase/MLNLSmartReply'
  3. In your app, import Firebase:

    import Firebase
  4. Create a conversation consisting of TextMessage objects:

    var conversation: [TextMessage] = []
    let message = TextMessage(
        text: "How are you?",
        timestamp: Date().timeIntervalSince1970,
        userID: "userId",
        isLocalUser: false)
    conversation.append(message)
  5. Generate smart replies for the conversation:

    let naturalLanguage = NaturalLanguage.naturalLanguage()
    naturalLanguage.smartReply().suggestReplies(for: conversation) { result, error in
        guard error == nil, let result = result else {
            return
        }
        if (result.status == .notSupportedLanguage) {
            // The conversation's language isn't supported, so the
            // result doesn't contain any suggestions.
        } else if (result.status == .success) {
            // Successfully suggested smart replies.
            for suggestion in result.suggestions {
                // Pass the suggestion on to the user.
            }
        }
    }
