1.1. ODR
1.2. Focus on Procedural Fairness
(See Giacalone’s dissertation, 2016.) Meanwhile, a meritocratic claim to justified inequality can arise both in the proportions of the outcome and in the asymmetries of the procedure. There are constraints as well as biases: we know from Rawlsian welfare economics that there may be a floor, and from recent wealth-concentration concerns that there may be a ceiling. Professional sports rules for referees show us that there are limits on the amount and timing of randomness. Part of the justification of these procedures is that outputs are constructed upon participants’ inputs.
1.3. Focus on Describing Outcomes
1.4. New Objects of Value
2. Mathematical Representation
A set of transition probabilities, pij, links the milestones, but the trajectory is a path, not a tree, so this is not a Markov model or any other stochastic state-transition model. In Herbert Simon’s terminology, it might be an «aspiration», or what AI might now call a planned path.
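To make the representation concrete, here is a minimal sketch of a trajectory as an ordered path of milestones joined by transition probabilities pij. The class and field names are our own illustration, not from the text; note that a pij may be numeric or a qualitative label such as a legal standard:

```python
from dataclasses import dataclass, field

@dataclass
class Milestone:
    name: str
    # a_k(m_i): the valued attributes describing this milestone
    attributes: dict = field(default_factory=dict)

@dataclass
class Trajectory:
    milestones: list   # ordered path m1, m2, ... (a path, not a tree)
    transitions: list  # p_{i,i+1}: a number, or a qualitative label

    def path_probability(self) -> float:
        # Multiply only the numeric transition probabilities; qualitative
        # labels like «meets the standard required by this court» are skipped.
        prob = 1.0
        for p in self.transitions:
            if isinstance(p, (int, float)):
                prob *= p
        return prob

plan = Trajectory(
    milestones=[Milestone("m1"), Milestone("m2"), Milestone("m3")],
    transitions=[0.9, "meets the standard required by this court"],
)
```

The linear `milestones` list is what distinguishes this from a state-transition model: there is one planned path, not a branching space of states.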
We do not suppose that all k attributes carrying value can be so reduced. Chances mentioned in milestone descriptions are different from the transition probabilities that guarantee the connectedness of the trajectory. Remember that pij might often be simply «meets the standard required by this court». Arguments for and against a trajectory take several forms:
(1) proposing a trajectory;
(2) deriving parts of a description of a milestone;
(3) adding to the considerations that describe a milestone;
(4) adding to the hazards that might strengthen or weaken a transition probability;
(5) giving a probability argument for a transition probability based on statistics, precedent, or statute;
(6) arguing that the trajectory, taken as a whole, meets a fiduciary standard, is a fair division, or is an improvement over BATNA (the best alternative to a negotiated agreement).
Arguments of types 1–3 are familiar from dynamic planning in AI (though unfamiliar in decision and game theory). Arguments of type 4 are inherited directly from risk analysis in reliability and safety engineering, as well as from policy planning in management. Arguments of type 5 are familiar to certain kinds of non-Bayesians who construct probabilities directly from data, and even to some Bayesians and objectivist probabilists who can conceive of conflicting evidence. Type 5 arguments to a standard of proof, and type 6 arguments, are familiar to those who study case-based reasoning. Type 6 arguments could also yield to machine learning, especially for ADR/ODR, because in ADR a typical settlement is just as meaningful as a justified settlement. Machine-learning projections can also carry normative force when the training examples are exemplary and the features are connected to principles. Another way to make type 6 arguments is through the construction of preference, or utility, arguments.
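The six argument forms can be catalogued in code. The enum below, and the type-6 BATNA comparison, are an illustrative sketch with names of our own choosing, not an implementation from the text:

```python
from enum import Enum

class ArgType(Enum):
    PROPOSE_TRAJECTORY = 1    # (1) proposing a trajectory
    DERIVE_MILESTONE = 2      # (2) deriving parts of a milestone description
    ADD_CONSIDERATION = 3     # (3) adding considerations that describe a milestone
    ADD_HAZARD = 4            # (4) hazards strengthening/weakening a transition
    PROBABILITY_ARGUMENT = 5  # (5) based on statistics, precedent, or statute
    WHOLE_TRAJECTORY = 6      # (6) fiduciary standard, fair division, or BATNA

def better_than_batna(trajectory_value: float, batna_value: float) -> bool:
    """A type-6 argument: the trajectory, taken as a whole, is an
    improvement over the best alternative to a negotiated agreement."""
    return trajectory_value > batna_value
```

Types 1–3 construct the plan, type 4 and 5 attack or support its links, and type 6 evaluates the whole; a dialogue system would dispatch on `ArgType` accordingly.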
3.1. One-Shot Tug of War
3.2. One-Shot Exchange to Two Shot
An even better way to represent the same situation for the purposes of ODR would be to name the objects in the exchange: a1(m1) = semi-antique-Tabriz-3x5-area-rug; a2(m1) = 100€. Naming the objects makes it easier to create variations of the proposal:
a1(m1) = semi-antique-Tabriz-3x5-area-rug; a2(m1) = 50€-and-promissory-note
followed by a second milestone:
a1(m2) = semi-antique-Tabriz-3x5-area-rug; a2(m2) = 50€-prior; a2(m2) = 50€-at-this-later-time, where there remains room to put a specific calendar date on the second payment (and possibly documentation of fulfilment of the promissory note).
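The two-milestone variation can be encoded as plain data. The dictionary keys below (`a2_prior`, `a2_due_date`, and so on) are hypothetical names introduced only for illustration, following the a_k(m_i) notation in the text:

```python
# Milestone m1: rug against 50€ plus a promissory note.
m1 = {
    "a1": "semi-antique-Tabriz-3x5-area-rug",
    "a2": "50€-and-promissory-note",
}

# Milestone m2: the rug has changed hands, the note is redeemed.
m2 = {
    "a1": "semi-antique-Tabriz-3x5-area-rug",
    "a2_prior": "50€-prior",
    "a2_now": "50€-at-this-later-time",
    # room left for a specific calendar date on the second payment:
    "a2_due_date": None,
}

trajectory = [m1, m2]  # a path of milestones, in order
```

Leaving `a2_due_date` unset shows how a proposal can deliberately leave a slot open for later negotiation of the calendar timing.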
3.3. Child Custody Example
One of the most important applications of ODR is the negotiation of child custody in family court. The tug-of-war typically occurs along the time-spent-with-child or presence-at-holidays dimension, though support payment levels may also be subject to barter.
a1(m10) = roughly-equal-visitation-by-day; a2(m10) = roughly-equal-presence-at-holidays
Regardless of how the path sets out, there are clear hazards. A small event, such as a non-electively missed holiday, may be corrected by a joint commitment to a response policy of trading the next holiday. This is a non-specific hazard, so there is a question of whether the analysis permits hypothetical events with responses that generalize from the specific to the non-specific. One way to do the analysis is to create a milestone for the end of each year and give a specific anchored-by-year response commitment for each anchored-by-year hazard.
Event = cannot-do-holiday-in-year-3;
Response = swap-first-option-holiday-in-year-4.
A much larger hazard is also hypothetical:
Event = parent-job-is-transferred-out-of-state; Response = transferred-parent-pays-plane-fares.
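The anchored hazard/response commitments above can be sketched as a simple lookup table; the function name is our own illustration:

```python
# Anchored hazards with jointly committed responses, as in the custody
# example: each anticipated event maps to an agreed response policy.
hazards = {
    "cannot-do-holiday-in-year-3": "swap-first-option-holiday-in-year-4",
    "parent-job-is-transferred-out-of-state": "transferred-parent-pays-plane-fares",
}

def committed_response(event: str):
    # Look up the agreed response for a specific (anchored) hazard;
    # returns None if the event was not anticipated in the agreement.
    return hazards.get(event)
```

The `None` case is the interesting one for the analysis: an unanticipated event has no committed response, which is exactly the gap that generalizing from specific to non-specific hazards is meant to close.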
3.4. Actual ODR Cases
Giacalone’s recent dissertation on ODR examined 300 cases of child custody. In his opinion, over 200 would have benefitted from the more complex representation described here, primarily because family disputes involve the assignment of non-comparable items. Cases turned on such things as:
a) the approval of a separation agreement in time;
b) the termination of a joint tenancy;
c) the judicial separation;
d) the division of multiple assets held as property;
e) the judicial separation and restitution of defined goods;
f) the division of defined goods and adequate compensation for indivisibles;
g) the switch from a judicial separation to the approval of the separation agreement;
h) the custody of a child.
4.1. Fiduciary Standards
4.2. Traditional Negotiation Ideas
4.3. Returning Attention to ODR Systems
5. References
James Allen/Nate Blaylock/George Ferguson. «A problem-solving model for collaborative agents.» Proc. 1st Intl. Joint Conf. Autonomous Agents and Multiagent Systs.: part 2, 2002.
Leila Amgoud/Jean-François Bonnefon/Henri Prade. «An argumentation-based approach to multiple criteria decision.» Eur. Conf. Symbolic and Quantitative Approaches to Reasoning and Uncertainty. Springer, 2005.
Michal Araszkiewicz/Agata Lopatkiewicz/Adam Zienkiewicz. «The role of new information technologies in alternative resolution of divorce disputes.» Eur. Sci. J. 2014.
Katie Atkinson/Trevor Bench-Capon/Sanjay Modgil. «Argumentation for decision support.» Intl. Conf. Database and Expert Systs. Applications. Springer, 2006.
Terje Aven. Risk Analysis. John Wiley & Sons, 2015.
Emilia Bellucci/John Zeleznikow. «Developing Negotiation Decision Support Systems that support mediators: a case study of the Family_Winner system.» AI and Law 13.2 2005/233–271.
Sandra Carberry/Lynn Lambert. «A process model for recognizing communicative acts and modeling negotiation subdialogues.» Comp. Ling. 25.1 1999/1–53.
Davide Carneiro/Paulo Novais/Francisco Andrade/John Zeleznikow/José Neves. «Online dispute resolution: an Artificial Intelligence perspective.» AI Rev. 41.2 2014/211–240.
Davide Carneiro/Paulo Novais/Francisco Andrade/John Zeleznikow/José Neves. «Using Case-Based Reasoning and Principled Negotiation to provide decision support for dispute resolution.» Knowledge and Info. Systs. 36.3 2013/789–826.
Jennifer Chu-Carroll/Sandra Carberry. «Conflict resolution in collaborative planning dialogs.» Intl. J. of Human-Comp. Studies 53.6 2000/969–1015.
Berend de Vries/Ronald Leenes/John Zeleznikow. «Fundamentals of providing negotiation support online: the need for developing BATNAs.» Proc. 2nd Intl. ODR Wkshp., Tilburg, Wolf Legal, 2005.
Elisabeth Fersini/Enza Messina/L. Manenti/Giuliana Bagnara/Soufiane el Jelali/Gaia Arosio. «eMediation: Towards Smart Online Dispute Resolution.» KMIS, 2014/228–236.
Roger Fisher/William Ury/Bruce Patton. Getting to Yes: Negotiating Agreement Without Giving In. Penguin Putnam, US 2008.
John Fox/Cera Hazlewood/Torran Elson. «Use of argumentation and crowdsourcing techniques for risk assessment and policy development.» Proc. 11th Wkshp. Arg. in Multi-Agent Systs., 2014.
Marco Giacalone. «Dispute Resolution and New IT Realities.» Doctoral dissertation, Università di Napoli, 2016.
Guido Governatori/Duy Hoang Pham. «DR-CONTRACT: an architecture for e-contracts in defeasible logic.» Intl. J. of Business Process Integration and Management 4.3 2009/187–199.
Nicholas Jennings/Peyman Faratin/Alessio Lomuscio/Simon Parsons/Michael Wooldridge/Carles Sierra. «Automated negotiation: prospects, methods and challenges.» Group Decision and Negotiation 10.2 2001/199–215.
Nikos Karacapilidis/Costas Pappis. «A framework for group decision support systems: Combining AI tools and OR techniques.» Eur. J. of Op. Res. 103.2 1997/373–388.
Paul Krause/John Fox/Philip Judson. «An argumentation-based approach to risk assessment.» IMA J. of Management Mathematics 5.1 1993/249–263.
Sarit Kraus/M. Nirkhe/Katia Sycara. «Reaching agreements through argumentation.» Proc. 12th Intl. Wkshp. Distributed AI, 1993.
Arno Lodder/Ernest Thiessen. «The role of artificial intelligence in online dispute resolution.» Wkshp. Online Dispute Resolution at the Intl. Conf. AI and Law, Edinburgh, UK, 2003.
Moshe Looks/Ronald Loui. «Game Mechanisms & Procedural Fairness.» JURIX, 2005.
Ronald Loui. «Against Narrow Optimization and Short Horizons: An Argument-based, Path Planning, and Variable Multiattribute Model for Decision and Risk.» J. of Logics 2016.
Ronald Loui/Diana Moore. «Dialogue and deliberation.» Technical Report WUCS-97-11, Washington University in St. Louis, 1997.
Peter McBurney/Simon Parsons. «Risk agoras: Dialectical argumentation for scientific reasoning.» Proc. 16th Conf. Uncertainty in AI. Morgan Kaufmann, 2000.
Pedro Brandao Neto/Ana Paula Rocha/Henrique Lopes Cardoso. «Risk assessment through argumentation over contractual data.» Proc. 8th Iberian Conf. Info. Systs. and Technologies (CISTI), 2013.
Iyad Rahwan/Sarvapali Ramchurn/Nicholas Jennings/Peter McBurney/Simon Parsons/Liz Sonenberg. «Argumentation-based negotiation.» The Knowledge Eng. Rev. 18.4 2003/343–375.
John Zeleznikow/Emilia Bellucci/Uri Schild/Geraldine Mackenzie. «Bargaining in the shadow of the law: using utility functions to support legal negotiation.» Proc. 11th Intl. Conf. AI and Law, 2007/237–246.