Tuesday, September 10, 2013

Thursday, September 12: Semantics

This week we will read about semantics and natural language.  My goal is for you to understand how words combine through syntax to create meaning.  In the following weeks we will study computational approaches, but this week we will focus on linguistic representations.

Read:
Questions:
  • Complete the following mini-problem set: http://cs.brown.edu/courses/csci2951-k/psets/2013-09-10-semantics/. Download the LaTeX file and fill in the entries in the table of word meanings using the notation from Heim and Kratzer. Email me the .tex and .pdf files when you are done. If you have never used LaTeX before, there is more information about it at the Brown CS LaTeX page.
  • Post an answer to the following question on the blog:  How should a robot represent word meanings?  What is good about the Heim and Kratzer approach?  What is missing?


18 comments:

  1. A robot should represent word meanings in the manner that will make it easiest for it to understand the commands issued to it. Describing nouns and verbs as functions, or sets of pairs, is a very static and non-contextual way to understand language. This is especially true for a command you're giving to a robot, since the context -- the facts of the world, the time, the locations of the objects in need of manipulation -- is exactly the subject of the conversation.

    The Heim and Kratzer work is for specifying some kind of timeless absolute meaning, but what you want for a robot you're talking to (or for a person you're talking to) is meanings that apply to the world right here and right now. The good thing about their approach is that it's computationally tractable: you can easily imagine it encoded in Java or Lisp. What's less good about it is the difficulty of incorporating context. Once you have a function encoded for the verb "put", how will you automatically change it to accommodate whether we're talking about putting something on a truck or putting something in the empty space (which happens to be on a truck)? Modifications like that, on the fly, seem as if they would be quite challenging to implement on the H&K structure.
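
    On the point that this could be encoded in a programming language: here is a rough sketch in Python (my own toy model; names like `smokers` are invented, not H&K's), where a denotation is literally a function and composition is literal application:

```python
# Sketch of H&K-style denotations as functions (toy model, invented names).
smokers = {"ann"}                       # the extension of "smokes" in this model

def smokes(x):
    """[[smokes]]: a function from entities to truth values (type <e,t>)."""
    return x in smokers

ann = "ann"                             # [[Ann]]: an entity (type e)

def functional_application(f, a):
    """H&K's core composition step: apply one denotation to its sister's."""
    return f(a)

print(functional_application(smokes, ann))   # True: "Ann smokes" holds here
```

    Context-sensitivity would then mean rewriting `smokers` (the model) on the fly, which is exactly the hard part the comment above points to.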

  2. A robot should represent word meanings in a way that lets it easily correlate them to actions and responses in its current world. The Heim and Kratzer approach allows for a quick and concrete assignment of words and sentences to truth-values, which could theoretically be translated to actions if the truth-values could be transformed into some form of understanding/actions. Unlike the Winograd approach, however, it is not quite as direct an understanding. The Heim and Kratzer approach is beneficial in that it can easily handle unknown objects entering the world (they would simply evaluate to false under all functions); however, it faces a scalability problem similar to the Winograd approach in that the functions are so concrete that new things can only be defined in terms of previously defined things. The Heim and Kratzer approach does provide an excellent means of distinguishing syntactic errors from errors of presupposition, which would definitely be a powerful tool in determining the meaning of commands. The context dependency proposed in Chapter Four would be difficult to implement, but would mirror the power of Winograd's robot's ability to determine the meaning of context-dependent sentences. The two major components that I see missing are scalability and a good way to turn the truth-values of sentences into real-world actions.

  3. 1. I'm not sure if I am ready to say how a robot "should" represent word meanings. Based upon arguments put forth by Heim/Kratzer as well as Winograd, it seems that definitions should have some functional aspect (Winograd expressed them in terms of predicates, actions, and results. Heim/Kratzer expressed them in terms of theoretical functions from a space of objects to either truth values, or other functions). This would support a robot's ability to logically process meanings as they apply to specific objects/actions/behaviors in the world (however, its robustness is limited by the same scaling problem we discussed in class with regards to the Winograd approach).

    2. The Heim/Kratzer approach is good because it is mathematically grounded and builds the method up from comparatively few axioms (set/type theory). Computationally, it readily suggests an implementation in some functional, typed language (one which supports compound types). Another positive aspect of this approach is the distinction between statements which are uninterpretable (provably incorrect because of the domains of their constituent words/functions -- in essence, syntactically wrong) and failures of presupposition (syntactically correct, but inaccurate due to some presupposition made by the words/functions involved). This seems to parallel, in programming, syntax versus runtime errors.

    3. Firstly, any implementation based upon this method would clearly fall victim to the same sort of scalability issues which plague the Winograd approach. Any word which is undefined must be given a definition (or an extremely robust system of inference must be developed), which means many hours of training or programming very specific concepts. This rapidly becomes computationally taxing. In addition, this approach seems to have wiggle room for the inclusion of predicates (i.e. grab = lambda x in D_e and x is light enough to carry and I am not currently carrying anything ...) but not for results (side-effects) or global state, such as "once I've grabbed something, I am now ~holding~ it." That being said, this is a book about grammar, and (at least in the assigned chapters) the intent never seems to be to develop a scalable/extensible system for robotic communication.
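
    To make that "predicates but not side-effects" point concrete, here is a hedged Python sketch (the model and all names, including `grab`, are invented for illustration) of the lambda above. Note that the denotation can only *test* conditions; the resulting state change ("I am now holding x") has no place in it:

```python
# Sketch (invented toy model): "grab" as a predicate with conditions,
# following the lambda in the comment above.
D_e = {"block", "piano"}          # the domain of entities
light_enough = {"block"}          # what I could actually carry
currently_carrying = set()        # global state the denotation can read...

def grab(x):
    """lambda x in D_e . x is light enough and I am not carrying anything."""
    # ...but never writes: nothing here records that x is now held.
    return (x in D_e) and (x in light_enough) and (len(currently_carrying) == 0)

print(grab("block"))   # True: conditions met
print(grab("piano"))   # False: too heavy
```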

  4. To reduce the need to program word meanings into robot software, words should be represented in an expandable and modifiable way. As language commands and queries are heavily tied to what a robot can perceive, the robot may not understand the entire picture through words or perception alone. Word meanings, though, have to be relatable, eventually, to its understanding of the world: words need to be mapped to objects and actions, so word meanings need to be formalized in a manner that supports that mapping. As a robot's perception is never 100% certain, word meanings may also not be completely certain. Statistical methods need to be used to allow the robot to select the most probable meaning.

    The Fregean program assumes that word functions map to discrete truth values, but in reality there will always be uncertainty about a word's meaning in the realm of robotics. The robot will just never understand the entire picture a user is trying to describe. Like Tom says, this system provides a very direct mapping from language to code. As language is formalized into word sets, function evaluations, and truth values, an expandable word understanding can be constructed.

  5. How should a robot represent word meanings?
    In the case of nouns, if a word represents a person, its meaning should correlate with that person's visual image, vocal characteristics, and memories or events related to that person. If a word represents an object, the word meaning comprises its visual appearance, affordances, and also memories related to it.
    In the case of verbs, a word can mean a physical or mental act, or it can have more sophisticated and metaphysical meanings. If a verb means a mental act such as 'liking', we can program the robot to perform nonverbal social actions which relate to the 'liking' act, such as smiling or laughing.
    In the case of adjectives, a word can become a selector of an object or a person. In the case of adverbs, a word can be fed into a verb and change the behavior or memory related to the act that the verb is representing.
    SHRDLU tried to solve this problem by representing word meanings as programs that define actions in the system to process those meanings.
    If we are to reason about what a phrase or a sentence means, just as in Heim and Kratzer's approach, we can try to know what is true given some circumstances or an absolute environment. Using conditional semantics, the true states and their relations to real-world entities or mental entities can be used to compute which entities relate to phrases like 'a red ball', 'a person who visited yesterday', or 'a thought that I had in mind'.

    What is good about the Heim and Kratzer approach?
    Paradoxically, one of its main advantages is that it uses language itself to represent word meanings. You don't necessarily need an object or an act to attach to word meanings.
    Since it uses conditional logic (functions, which can be represented as sets as well) to represent a word, a verb, or a grammatical component of our language, we can easily use logical deductions to draw or calculate truth in a given environment.
    Its interoperability between conditional-logic notation and sets also makes the calculation more convenient. For example, this method is mathematically sound in representing how to calculate what 'red blocks' are and who 'Jack' is in the given environment.
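
    That set/function interoperability can be sketched directly (a toy Python model of my own; the extensions are invented): since a predicate of type <e,t> characterizes a set, composing 'red' with 'blocks' amounts to intersecting two sets:

```python
# Toy model (invented extensions) of "red" and "block" as sets of entities.
red = {"ball1", "block2"}
blocks = {"block1", "block2"}

# As a function: [[red block]] holds of x iff x is red and x is a block.
red_block = lambda x: (x in red) and (x in blocks)

# As sets: the same denotation is just the intersection.
red_blocks = red & blocks
print(red_blocks)                      # {'block2'}
print(red_block("block2"))             # True, agreeing with the set view
```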

    What is missing?
    We still need to hand-code meanings. This can become a problem when we want to expand the knowledge base of a robot. This notation also doesn't relate to the physical world: in the case of a robot, the word 'hit' is not correlated to the actual physical act of hitting, yet. Unlike in SHRDLU's case, we need a second link to connect real-world actions to word meanings.
    It is also unclear how to resolve ambiguities between words. How can we handle synonyms with slight differences? It may be easy if synonyms have an n-to-1 mapping and represent only one ground meaning. In most cases, it is likely that synonyms have an n-to-n mapping to grounded meanings.

  6. A robot should represent word meanings in a form which it can process and understand. Using that representation, a robot should be able to connect a word to what the word refers to.

    Heim and Kratzer’s approach maps a sentence to a truth-value. A truth-value, 0 or 1, is a nice input value to a robot. A robot can certainly handle a truth-value even though it doesn’t understand what the original English sentence means. With this approach, a robot would be able to “understand” its environments.

    I am not sure how their approach can be used to instruct a robot to do something. An English command would translate to either 0 or 1, so there are only two possible input commands to a robot even though an infinite number of English commands is possible. Their approach loses a substantial amount of information when it converts a sentence to a truth-value.


  7. 1) Ultimately, it may depend on the domain in which the robot exists/is designed for. If the goal is for a robotic agent to achieve general, genuine language understanding, then it seems that we will need quite a bit more than is offered by Heim/Kratzer – specifically, the agent would need to couple the formal functional-application system of semantic composition (based rigidly on Frege's Conjecture: “semantic composition as functional application”, p. 13) with information and reasoning about (1) the agent(s) that the robot is speaking with (or at least the ability to make useful inferences about novel speakers' language models), (2) the context or environment in which the robot exists (once again – this could be accomplished with an inferencing system that could construct a rough model of the environment instead), and (3) a sense of the pragmatics, implicature, and other meta-features about sound, objects, and language. This is only if the ultimate goal is to enable the robotic agent to reason and understand/generate language about novel situations, objects, and events. One possibility is that robots generate word meanings based on the particular “language games” (see: Wittgenstein) they are participating in, relying heavily on the notion that “language is use” (Wittgenstein's Philosophical Investigations).

    2/3) The system described is effective at modeling the truth-conditions for a subset of possible utterances, and furthermore, provides a formal mechanism for computing these truth-conditions that is relatively easy to compute in virtue of its similarity to the lambda calculus. The small number of rules (only seven) also makes it appealing, as it is a simple system that is capable of modeling a huge domain.
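
    As a hedged illustration of that similarity to the lambda calculus (my own toy encoding, not H&K's actual seven rules), the rule-driven computation of a sentence's truth-conditions can be sketched as a recursive walk over a parse tree:

```python
# Toy sketch: Terminal Nodes + Functional Application over a tiny parse tree.
# The lexicon entries here are invented for illustration.
lexicon = {
    "ann": "ann",                        # [[Ann]] is an entity
    "smokes": lambda x: x == "ann",      # [[smokes]] is a function, type <e,t>
}

def denote(tree):
    """Look up leaves; at a branch, apply whichever sister is the function."""
    if isinstance(tree, str):
        return lexicon[tree]
    left, right = (denote(t) for t in tree)
    return left(right) if callable(left) else right(left)

print(denote(("ann", "smokes")))         # True: [[smokes]]([[Ann]])
```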

    However, it is not straightforward that all utterances may be reduced to truth-conditions (consider: (1) questions, (2) ambiguous sentences, (3) sentences without possible truth values, like the Liar Paradox or nonsensical statements, (4) fragments, onomatopoeia, and other utterances that convey meaning but are without truth-values, etc.). In order to accomplish this within the Heim/Kratzer/Frege framework, we would need to add a lot of semantic meat regarding what words mean (e.g. "smokes"), and how they relate to the world and other actions/objects within it – this might be possible, and is likely representable in terms of further functions that make use of the “smokes” function. Ultimately, though, this highlights a fundamental weakness of the system for handling novel scenarios.

    Furthermore, little is done to deal with actual human language, as it is rare that the entire meaning of a human utterance is contained in the sentence spoken – for instance, on page 74, the authors mention the case of the sentence “the stairway in South College”, when in fact there are several stairways in South College. They proceed by arguing that the utterance ought to have “no semantic value”, since the input domain of the function of “the” is restricted to singular instances (since it cannot be resolved otherwise), and thus the sentence is semantically null. This seems odd, though, when contrasted with the mechanism that Winograd's system used to determine which possible stairway in South College the utterance might be referring to (i.e. using contextual clues or additional ad-hoc reasoning to isolate a semantic model of “the stairway” when multiple stairways exist). Contextual information and pragmatics ought to be considered, too. In H. P. Grice's paper “Logic and Conversation”, he suggests that there are some additional maxims that humans make use of in order to make language an effective medium for communication. When considering possible models for language meaning as applied to robotics and AI, one might consider trying to incorporate some Gricean Maxims to more accurately capture the way language works.

  8. 1. The robot should represent word meaning based on the internal language it uses to represent the world. If there is no formal language inside the robot to represent the world model, natural language directly interacts with the robot's executor, an abstract component that manages the execution of the robot. So Winograd's approach, which takes each part of the sentence as a procedure, and Heim and Kratzer's approach, which pairs sentences with their truth-conditions, can both be easily converted to actions and facts in the executor. But if a formal language is employed in the robot to represent the world, natural language can be translated into this formal language for representation and reasoning.
    Another problem is how the robot can relate the entities in the real world to word meanings. The robot knows physical information about the world, and it should relate this information to word meanings. The robot's mental world should also be related to word meanings. For example, the robot should know that its manipulation ability relates to the words "put", "place", or "pick up" under different conditions.
    2. Heim and Kratzer's approach pairs sentences with truth-conditions. It treats "semantic composition as functional application", using conditional logic. It is mathematically sound and complete. With its fixed semantic rules, it can easily introduce new words and sentences.
    3. The biggest problem with Heim and Kratzer's approach is its expressive ability: only handcrafted semantic rules can expand what it is able to express. In order to model the world, it is obviously not sufficient for a robot to use this approach alone to understand word meaning.

  9. (1) As far as I am concerned, a robot should represent the meaning of a noun with some images or features that can be used to identify certain objects. And for verbs, a robot should represent them with instructions that need to be performed and notifications when conducting them.

    (2) One good thing about this approach is the use of functions and true-false values to represent the real meanings of words. By nature, a robot is better able to process functions and 0-1 values quickly, which makes computation more efficient.
    Moreover, using the true-false values computed by the defined functions, it can easily understand and make good judgments about the current environment.

    (3) I think the scalability problem is inevitable for approaches using predefined rules, dictionaries, or functions. New functions and semantics need to be defined and added manually to scale this approach.

  10. How should a robot represent word meanings?
    There should be a standard representation for a robot that represents word meanings in a specific way (e.g., using sets and functions mathematically, taking the denotations of sentences to be truth-values) to help the robot systematically and better understand word meanings. With that standard representation, it is easy for the robot to expand its knowledge to all words based on existing reactions/behaviors for some of them. The robot should also be able to learn how to behave correctly for verbs (how 'smokes' is acted out, for both physical and mental behaviors) and identify objects for nouns (what a 'pyramid' looks like) by understanding word meanings and correlating them to the environment in the real world.

    What is good about the Heim and Kratzer approach? What is missing?
    This approach formulates a set of semantic rules that provides denotations for all trees and subtrees, as sentences are represented as phrase-structure trees. It pairs sentences with their truth-conditions, which helps the robot easily understand word meanings in a standard way and execute commands in the real world (do it if it is true, otherwise not). Given some simple and specific training for some words, its scalability allows the robot to easily expand that partial training knowledge to all words (a kind of easy learning).

    However, it seems this approach is missing a way to correlate word meanings to real behaviors in the real world (e.g., physical and mental behaviors: how to represent 'pick up', how to represent 'like'), even though they are represented as truth-values. Moreover, it is difficult for the robot to resolve ambiguities. Given a specific word, how do we represent all its possible meanings and understand which meaning the command is asking for? (E.g., is the word 'love' a noun or a verb in some cases?)

  11. A robot should represent word meanings in a manner that a word is easily related to the concept that the word denotes. Also, the representation should somehow enable the robot to link a word to other words that refer to correlated concepts. For example, when processing "grasp", the robot should be able to recognize the verb's possible subjects and objects.

    What is good about the Heim and Kratzer approach is that it provides a formal method to determine the truth-condition of a sentence, which should make it easier for the robot to answer questions about the "state" of the world, e.g., "what does the box contain" or "is there a large block behind a pyramid", etc. In addition, as mentioned in the discussions above, the H&K approach is implementable in a programming language.

    What I found missing in the H&K approach is what enables the robot to execute commands, i.e., the ability to plan. In order to finish a task indicated in a command, the robot should be aware of the preconditions and effects of actions (verbs), which I didn't find in the H&K approach. Also, the input to the H&K approach is sentences with parse trees, while the input to a robot is usually just the sentences. Therefore, to use the H&K approach in robots, a parsing step needs to be added.


  12. Alright, I have to say: I felt the Heim and Kratzer reading was grossly incorrect. They get a few things right, and in my opinion, a _lot_ wrong. They try to use proofs and classical mathematical functions on language, which seems entirely inappropriate to the task at hand. They start off with what I'd take to be a correct assertion, that knowing the meaning of a sentence is equivalent to knowing its truth-conditionals (assume we restrict sentences to only factual statements). Indeed, this is equivalent to the predicate definitions of statements that was very effective within their domain in the Winograd thesis. While I found this a worthwhile and important observation, the arguments go dramatically downhill after that point.

    One major failure is their claim that the phrase 'Ann laughed Jan' is un-parsable, un-grammatical, and therefore contains no meaning. In fact, the sentence certainly gives a native English speaker some pause, but there are a number of entirely plausible interpretations. Perhaps the most salient is that 'Ann made Jan laugh.' One can especially imagine such errors in the context of communicating with non-native English speakers. Depending on their level of English proficiency, such errors can be common, and yet a native English speaker can have an entirely normal conversation with the non-native speaker after this initial adjustment of the native speaker's internal grammatical expectations. In a similar fashion, their definition for 'the' makes little sense. Compare 'The dog needs food,' 'A dog needs food,' and 'Dog needs food.' We can tell a distinction for 'A dog' - it makes the statement a general observation. 'The dog' refers to a specific dog, and the third is somewhat ambiguous, but not without meaning.

    One last criticism: at least in the sections that we read, they make no attempt to actually define the meanings of words, let alone discuss that definition. They simply claim the meaning is, e.g., 'Ann smokes' iff Ann smokes. This tautology achieves nothing in helping define words. In my opinion, what is missing is a much rougher notion of word-association. Take the sentence 'John chokes.' (I grant it's not the greatest example.) While it could mean that John is choking on a food item, there's a certain probability it also means that John chokes other people (and that the sentence is missing a direct object). Perhaps this issue is resolved with statistical parsing, though. Take a phrase such as 'John dog.' It certainly makes no sense - there's no verb - but it conveys some amount of meaning, and it leaves the reader guessing the intended truth-conditionals. Maybe John owns a dog, perhaps he wants to see his dog, and so on. In short, there should be no concept of an 'invalid parse.' There can certainly be confusion, and plenty of it, but even a very confusing sentence (fragment) conveys a slight amount of information.


    Referring to the question of how a robot should represent word meanings, clearly predicates are fairly useful. There also has to be some notion of word associations. If I say something is Salmon-colored, it is also to varying-degrees pink-, red-, and orange-colored. Furthermore, it might bring salmon the fish to your mind. These types of word-association chains I feel are integral to truly understanding word meanings. There should also be a kind of image component: the word 'salmon' should recall images of salmon. It should also have a word-associations with swimming, dinner, salty, and a host of other concepts.

  13. Robots should represent word meanings in a way that is effective but adaptable. A robot, like a human being, should be prepared to learn about its world in order to be most effective at knowing accurate word meanings. It should not require a large amount of human constructed input because it is a herculean if not impossible task to attempt to manually encode any reasonable amount of information about the meaning of things in the world at large. These requirements would suggest a kind of statistical method, because statistical parameters can be tuned over time with new training data just as human beings learn as they observe new things. Additionally, there is some evidence from cognitive science research that human language understanding is statistical in nature.

    Meanings should be grounded somehow in observation and reality, just as they are for humans. Any robot that is able to act in the world must have some sensing systems, and therefore its understanding of words must be connected with the observations it has made in the world. One way to do this might involve training with “prototypes” of particular objects and actions, with some method of comparing new things to the prototypes it knows of. This is one popular theory of how human understanding works. In a prototype model, a robot encountering a new animal might ask a human associate what other animals are similar in order to fill out its tree of prototypes and related animals. For example, a canary might be a prototypical bird high up in a tree of related examples, while an emu would be more hidden lower down in this hierarchy.

    The Heim and Kratzer approach is nice in that it is presented in a mathematically rigorous manner, which makes it simple to reason about. It is also a naturally computational model because it is described in terms of lambda functions.

    The rigidity is also a weakness. It is not clear if a formally constructed system like this actually corresponds well to how human beings understand language (which we would like to believe would serve as a good model for an artificially intelligent robot). It's also unclear how this system would help a perceiving robot. It seems well constructed for formally explaining linguistic structures, but associating linguistic structure with real-world objects and actions is a very different problem. It's also unclear how a system like this could be constructed without requiring a great deal of initial human input, which, as stated above, is unwieldy and unlikely to offer the flexibility we would want out of an actually intelligent system. Also, it's unclear how well it would adapt to less than ideally constructed English, like ungrammatical language or idioms that don't follow normal structures.

  14. Robots should represent word meanings in a way that allows them to deconstruct a sentence they have never seen before and understand its meaning. This is an extremely ambiguous and abstract definition for word meaning and perhaps it would be more correct to call this a requisite. However, the goal of representing word meaning is an extremely broad one and I believe it is best to start by building requisites than to implement a definition. Indeed, this goal of representing word meaning may also be context dependent.

    For example, if our goal is to create a robot similar to SHRDLU, it could suffice to manually code a table of word meanings so each word refers to a series of properties, as done by Winograd. On the other hand, if our goal is to create a robot that can simulate human cognitive capabilities, we run into a multitude of other problems. One interesting challenge that comes to mind is: do all humans represent words in the same manner? This is unlikely, for if you ask two humans to describe the word "red", you will probably end up with very different definitions.

    The Heim and Kratzer approach is interesting in that it seems apt to taking a parse tree and generating a binary response. This is a very important property given the domain we are discussing. Typically a robot will take a sentence, generate a grammatical parse tree for that sentence, that parse tree is then interpreted in some way, outputting a signal that the robot can act upon. The fact that Heim and Kratzer's approach fits very well in this framework makes it seem naturally suited to our domain.

    On the other hand, their approach does seem to be missing some elements. First of all, what happens when we step away from the world of binary signals? What happens if we give the robot a command, instead of asking it to evaluate the truth value of a sentence? In that case, the output of their framework would have to be a function or a series of actions. In addition, their approach seems to require a pre-defined set of denotations. This would be fine for a robot in a limited environment, but as the environment grows in complexity, this list of denotations will also become more and more complex. A denotation may also be ambiguous and need information beyond its argument to be resolved. This could be something as simple as requiring an additional word in the sentence, but it could also be as complex as implicit information about the agent it is dealing with. Finally, there is the issue of learning. Heim and Kratzer don't indicate any way of adding denotations to their metalanguage. The ability to do so is of utmost importance, given the impossibility of encoding all denotations as the complexity of the environment grows.

  15. A robot needs to represent word meanings in connotations. It has to extract the current meaning of the word from the set of statements it has been provided with. Representing them is easier when we already have a language to represent them in; robots have no internal language to translate to. SHRDLU seemed to fill up tables of object properties to represent its environment and words. Heim and Kratzer calculate the truth values of sentences, which is what any robot would need to do, but representing the meaning of each word is still missing. In their text "smokes" is just a word; no meaning whatsoever is associated with it, just that it is a verb and can "act" as a function. I would assume that word representations would have to deal with associations with features, whether image-, texture-, or action-based, and store them in some sort of table. As people have already mentioned, learning them would require statistical approaches.
    The Heim and Kratzer approach, in my opinion, like SHRDLU, is not a system for "understanding" word meanings. Both methods represent word meanings: in the case of SHRDLU it was a sort of tabular representation, while the Heim and Kratzer approach is more mathematically rigorous. This would be an advantage for application to robots, because simple binary truth values can be dealt with by a computer.
    The problem with such binary truth values is that they make complex representations difficult. Longer and nonsensical sentences would have unreasonably complex representations. The Heim and Kratzer approach deals with only the denotation of a word, which is something that SHRDLU was able to deal with. It is unclear in this method how one goes about representing the denotation, or any word meaning, to the system in question.

  16. I think it's important to have a representation which is flexible and powerful, since likely one of the hardest parts of representing a word is determining which of many meanings it has (because of references, disambiguation in context, etc.) A procedure certainly captures nearly as much complexity as you can have.

    Heim & Kratzer propose a system that does an especially good job of defining
    how words are composed together, through parameterized functions and their
    partial application. I'm not sure how well it will extend to more complex
    words which can potentially take a variable number of parameters in potentially
    several different ways.
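
    The multi-parameter case can at least be partially handled with currying, as a hedged sketch (my own Python; `likes_pairs` and its contents are invented): a transitive verb takes its object first, and the resulting one-place function composes like an intransitive VP:

```python
# Sketch (invented model): a transitive verb of type <e,<e,t>>, curried so
# each composition step consumes exactly one argument.
likes_pairs = {("ann", "jan")}           # the subject-object pairs that hold

likes = lambda obj: lambda subj: (subj, obj) in likes_pairs   # [[likes]]

likes_jan = likes("jan")                 # partial application: the VP "likes Jan"
print(likes_jan("ann"))                  # True in this model
print(likes_jan("jan"))                  # False
```

    Words whose arity genuinely varies would need either several lexical entries or type-shifting, which is where this scheme starts to strain.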

    It does not especially satisfy me w/r/t the meanings of individual words. It
    seems to wrap up words in symbols, but it doesn't seem like that provides a lot
    of guidance to their meaning or rather, their implementation. They would
    probably argue that implementing words is out of scope for them, as it's
    domain-specific to a large degree. But I'm interested in seeing how to map
    percepts to symbols as well as how to map symbols to more complex meaning.

  17. I think that robots need to be able to connect word meanings with any world models that the robots may have. In order to reason about what data they may have, they need a representation which involves associations between data. Mere associations between words and their corresponding data would not be sufficient.
    On one level, the Heim and Kratzer approach allows functions to be defined independently, and in a modular fashion. Robotics may potentially involve physical movement in these verb functions, tying together language processing and interactive actions. Because functions are used, responses may be flexible. If a robot has data about the world, it is able to logically determine a response. If a data point doesn’t exist, then the robot may employ other means to acquire the desired data point.
    However, because functions are defined so independently of one another, words which should be associated with one another are not. Their approach offers no tools for such a task. It only allows associations of words with data, and not data with data. Questions which do not involve true or false answers make this deficiency evident. For instance, if you were to ask a robot “what color is this block?” then it should not have to find a list of colors and cycle through them all in search of a function which yields 1.

  18. A robot should represent word meaning differently depending on its function. If it is designed to follow commands, then it should represent words in terms of what it can and should do with certain objects and what verbs/commands mean in functional action words. If its purpose is to communicate and hold an intellectual conversation, then it needs a much more abstract representation of meaning. The Heim and Kratzer approach is attempting to represent how humans actually represent words underlyingly. The benefit of this is that theoretically creating robots that think in the same way as humans would be creating realistic artificial intelligence. However, in practice I don’t believe that is true or even the goal. This summer I worked on automated scoring of essays. I often had to explain to people that my goal was not to have the computer understand and grade the essay like a human would, but instead to output similar scores as a human would. The Heim and Kratzer approach is more philosophical and is missing a direct connection to actions that would allow a robot to mimic human thought and behavior.
