Sunday, April 24, 2016

Quis custodiet ipsos custodes: Electoral technologies cannot be black boxes

«Quis custodiet ipsos custodes?» is a Latin phrase from the Roman poet Juvenal, often translated as "Who will watch the watchmen?"

National elections are among the most important events in contemporary democracies. They unfold in social contexts and belong to a series of closely interrelated political, sociological, and technological processes. The traditional voting method in the Dominican Republic requires a physical ballot that must be marked and deposited in a sealed, transparent box. That box remains in plain view of the public and of the official observers who guard the ballot boxes until the electoral process concludes. As a result, voters place their trust in the electoral authorities and in the observing representatives, trusting that the votes cast will not be manipulated or altered, either when they are cast or during the subsequent count.

However, the proliferation of polls taken as voters leave the polling stations, known in journalistic jargon as exit polls, has forced electoral bodies to find fast ways of transmitting preliminary results. The Dominican Republic is no exception to this phenomenon, and this is one of the reasons behind the decision of the Junta Central Electoral (JCE) to incorporate electronic mechanisms to automate the counting of the votes cast.

The technologies applied to electoral processes are very complex systems that process sensitive data under an approach known, from an engineering standpoint, as a "black box." The black-box concept applies to devices or systems whose internal workings are hidden or not easily understood.

Since an election is not a study based on probabilistic samples but a full consultation of a defined universe, no margin of error can be accepted in principle: every result must be exact, reliable, and, above all, reflect the precise will of the citizenry.

The opacity of the black box is usually defended on security grounds. Yet it is also well known that there are established mechanisms and procedures for granting free access to the source code, simply to view it and verify its contents, with no possibility of adaptation or modification, on the assumption that only those with the proper authorization (electoral officials, party delegates, or representatives of observation organizations) may do so.

Electronic systems are expected to overcome the questionable practices that typically arise in manual systems (manipulation of figures, transcription errors, double counting of votes, counting of defective or spoiled ballots, and so on). At the same time, however, the security problems of electronic data processing and transmission systems are widely known.

For this reason, any technology designed to support an electoral process requires high-security criteria that are not normally applied in commercial systems. Given the particular characteristics of this environment, these technologies demand mature systems, extensive training, and high discipline from the personnel who operate them. Added to this, it is well known that operating high-security systems costs nearly four times as much as operating conventional systems and, even so, there is no guarantee of eliminating 100% of the margins of error or of the possible security breaches.

Returning to the engineering view of the process, there is an opposite of the black box: the "white box" principle. A white box is a system whose internal components, design, and underlying logic are all available for inspection. That is not the case with the JCE's systems, since we are only shown the level known as the "interface," that is, the final visible layer where users interact. The internal, opaque, deeper parts of the system, such as source code, protocols, cryptographic mechanisms, internal architecture, and logical and physical designs, among others, remain unknown.

The main security problem in the electronic processing of electoral data is the possibility that operators, programmers, or so-called system "super users" could access the content of the vote, whether individually or in aggregate totals, by manipulating the programs during or after the counting process.

A sense of trust is fundamental to legitimizing a democratic electoral process based on electronics and information technologies. For citizens who are not computer experts, however, it is very difficult to tell at a glance which parts of the systems are trustworthy.

The dictionary of the Real Academia Española defines trust as the firm hope or assurance that a person will act, or a thing will work, as desired. Trust is an essential element of the democratic process, especially with regard to the secrecy and freedom of casting a vote that will legitimize the election of the rulers of an entire country. Trust is also intrinsically linked to the idea of delegating power in a healthy political system.

In this domain, trust has a double connotation: it relates to a proprietary technology, and it is an attitude of the citizens. It concerns the system's ability to prevent fraud and to guarantee the secrecy and privacy of the ballots turned into votes. The skepticism of a good part of the citizenry about the rigor of the counting process could grow considerably if a technology were imposed that cannot ensure the cleanliness of the processes and of the results.

Electoral systems and technologies operate as complex and critical socio-technical systems, since the future of nations depends on their correct operation. Beyond the technological challenge, experts assert that the main obstacle to introducing new computing technologies in place of traditional electoral mechanisms is the lack of credibility stemming from people's limited ability to understand what is happening behind the computer system.

Worldwide, recent decades have seen much criticism of systems in which voters must blindly trust the election results and the correct functioning of those systems. In terms of reliability, social adoption, and trust, current electoral technologies have been found to suffer from a series of limitations, most notably the inability to fully meet verification requirements when the traditional physical process is eliminated altogether.

For critical processes such as national elections, the use of electronic mechanisms is recommended only when certain requirements are met, so that the citizen's expression is in fact guaranteed and privacy and security are adequate.

To guarantee a technologically transparent process, it must comply with certain formal principles established by the body of knowledge of computer engineering, among them:


  1. Extensive testing: In addition to the initial tests and the corresponding certification, the system must be audited once the process is complete, producing a comprehensive evaluation of its operation.
  2. Verifiability: Every system must be auditable at each stage of its operation. Speed in counting and transmitting results is also required, since the system must produce reliable results in the shortest possible time so as not to create political uncertainty.
  3. System integrity: Both the equipment (hardware) and the programs (software) must be designed to be tamper-proof. Ideally, nothing should change once the electoral process begins. Once the equipment is certified, the source code, initial parameters, configuration information, and basic programs and routines should remain static until the end of the process (one way to check this is with cryptographic digests; see the sketch after this list).
  4. System auditing: The system's equipment and programs, including the source code, must be available for inspection at all times, along with all supporting documentation (technical and operating manuals). There can be no claims of secrecy from private vendors or from the official entities in charge of development.
  5. Depth: The various levels at which the system operates must be taken into account, so that those authorized to audit it can access all levels of the programming and not only the interface programs that run "on the surface."
  6. Access control: Those who can access the system, whether to operate it or to audit it, are the weakest link in the security chain. Since they are its custodians, the old maxim arises again: who controls the controllers?
  7. Control of distributed systems: Systems that involve fragmented, decentralized handling of operations require greater control in their design to avoid compatibility problems among their parts, as well as more time and personnel to verify their operation.
  8. Vulnerability: In the end, every system is vulnerable, and it can always be subverted by introducing malicious software that operates independently or in parallel, with no need to modify the source code. It is also known that bugs can be planted that evade cryptographic verification mechanisms. All systems based on personal computers are vulnerable to the appearance of false parallel systems, that is, impersonators. All of this demands openness from the bodies administering the electoral process, so that constant audits and random verifications of the systems can be carried out.
  9. Legislation: Just like current electoral laws, which tend to be very detailed about manual processes and in many cases require special majorities for approval, all operations of the electronic systems and technologies applied to the electoral tallying process should be provided for in precise legislation and not left exclusively to regulations issued by electoral authorities or by the staff of those bodies.
  10. Oversight: It would be advisable to have a quality-evaluation committee for electoral systems that establishes criteria before new systems are adopted. Political parties and their representatives, electoral bodies, and representatives of civic organizations that promote clean electoral processes all have much to contribute in this regard.
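As an illustration of the integrity principle in point 3, the following minimal Python sketch compares the SHA-256 digests of certified artifacts against a reference manifest recorded at certification time; any mismatch means the software changed after it was certified. The file names and manifest format are hypothetical and not part of any actual JCE system.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_manifest(manifest_path: Path) -> bool:
    """Compare current digests against those recorded at certification time.

    The manifest is a hypothetical JSON file mapping artifact paths to the
    SHA-256 values captured when the system was certified.
    """
    manifest = json.loads(manifest_path.read_text())
    intact = True
    for artifact, expected in manifest.items():
        if sha256_of(Path(artifact)) != expected:
            print(f"MISMATCH: {artifact} changed after certification")
            intact = False
    return intact


if __name__ == "__main__":
    ok = verify_manifest(Path("certified_manifest.json"))
    print("Integrity OK" if ok else "Integrity FAILED")
```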


The challenge for those who design the systems (hardware and software) for electoral processes will always be to provide enough security so that the same levels of public confidence achieved through manual voting and counting can also exist in an electronic environment.

Technologies are not fashions; they respond to reasons and needs of efficiency, effectiveness, relevance, and process optimization. Above all, they are created to serve and improve procedures. They are not an end in themselves. Adopting them entails a sense of responsibility, knowledge of the implications, and awareness of the consequences of not following the existing formal models for their adoption.

When citizens do not trust an electoral mechanism, the legitimacy of the electoral process and the credibility of the government may be called into question. Such a situation could generate social and political instability for an entire nation. That is why the introduction of new technologies into the process must aim not only to facilitate procedures but also to make them more transparent, which guarantees greater reliability for citizens and all interested parties.

Sunday, February 14, 2016

Affectability in educational technologies: A socio-technical perspective for design

Hayashi and Baranauskas (2013) state that digital artifacts have the potential to augment students' interest and the quality of learning environments. However, it is still common to find technology being inserted into learning settings without a closer connection to the learners' contemporary world. This statement resulted from the evaluation of the socio-technical OLPC project implemented in Brazil.

A socio-technical project is one aimed at social welfare, so that its return is not only economic but is also measured by the impact it has on improving the well-being of the group or area it serves. In the case reported by Hayashi and Baranauskas (2013), a positive impact on the teaching-learning community was expected from the inclusion of low-cost personal laptops and the use of informal settings as local teaching strategies.
One of the challenges educational technology faces nowadays is to enrich the formal learning settings while maintaining a closer connection to the learners’ informal contemporary world. In this paper we brought to discussion the understanding of Information Systems’ formal, informal and technical layers of a framework, which may contribute to more holistic view of digital technology use in education. (Hayashi & Baranauskas, 2013)

Schools can use ICT to strengthen integration, curriculum, and culture, and to become an expanded, inclusive space that connects with other contexts of formal or informal training. The mere use of a technological innovation is no guarantee of educational innovation, which emerges when schools and educators are free to connect ideas, new concepts, and new ways of doing education; when they can go beyond what is preset, move past their own walls, seek external partnerships, and dare to experiment, reflect, and change.

Children must not only own the laptops but also take them home. In doing so, the whole family benefits. The project has shown unequivocally that parents become more involved in the education of their children and very often learn to use the laptops themselves. The role of the individual in society changes; it becomes a more productive role. The child is not the object of change but the agent of change.

ICT has had an important impact on educational systems. Its inclusion has been proposed to support improvements in the quality of education, to make it more relevant to the demands and requirements of the twenty-first century, and to renew the practices of teachers and students.

Several issues have hindered the integration of the XO computers in the classroom. At the institutional level, the teachers interviewed reported technical problems and the lack of immediate support (at the moment a problem occurs), lack of training, lack of time, poor quality of computer components, improper maintenance, and inappropriate hardware and educational software.

The project's conclusions found a positive effect of the OLPC program in terms of student motivation, increased teacher-student interaction, greater responsibility for attending school, and opportunities for autonomy and independence.

Considering the opportunities and limitations, a major concern remains the preparation of teachers, especially regarding the transition they must make from using the computer for repetitive practices toward an approach more integrated with the school curriculum.

References

Hayashi, E. S., & Baranauskas, M. C. (2013). Affectability in educational technologies: A socio-technical perspective for design. Journal of Educational Technology & Society, 16(1), 57–68.

Scenario Planning

In organizations that deal with uncertainty in one form or another, strategies are based on their own managers' forecasts or on the assumption that the past will repeat itself. Radical or discontinuous changes that could greatly alter the structure of the sector are often not considered or, at least, underestimated.

A simplified definition of scenario planning treats a scenario as the description of a potential or possible future, including details of how to get there. In everyday language, by contrast, a forecast expresses probable knowledge of a future event; in the language of business, it is usually understood as an early estimate of the value of a variable.

To analyze the future, scenarios are built: provisional and exploratory descriptions of a probable future. A scenario is a significant and detailed portrait of a plausible, permissible, recommended, and consistent future world. In it, one can clearly see and understand the problems, threats, and opportunities that such circumstances may bring. It is not a prediction or a specific forecast but a description of events and trends that may occur (Schmalbach, 2010).

Porter (1985) asserts that the purpose is to understand the combination of strategic decisions that will give maximum benefit, despite the uncertainties and challenges of the external environment. A scenario, in addition to being plausible, must have internal consistency, be useful for decision-making, and provide a description of the causal processes.

According to Selsky (2008), scenarios are constructed from conjecture. They are hypothetical statements, and their role is to identify a range of options and probable situations. They are hypotheses based on diagnoses of the forces shaping events and of the possible linkages among those forces, to which probabilities can be assigned.

Scenario planning is not interested in determining the probable date of an event but in the likely linkages between events. A scenario will not occur exactly as anticipated, but it suggests a probable sequence in order to sensitize decision makers to what can happen.

Finally, scenarios are situations that have not happened yet but have a certain probability of occurring. Since they are not impossible, the business or organization must be prepared for them; otherwise, it may be surprised by the future.

References

Porter, M. (1985). Competitive advantage: Creating and sustaining superior performance. New York: The Free Press.

Selsky, J. (2008). Business Planning for Turbulent Times: New Methods for Applying Scenarios. Cromwell Press.

Schmalbach, J. (2010). Scenario planning: Review of concepts and methodological proposals.


Precision Farming

I found the video by Kumar (2015) about the use of robots for precision farming very interesting. Machines increasingly perform activities and work traditionally done by human beings, and agriculture is no exception. Precision agriculture is an initiative that uses UAVs to "scan" plots in order to improve the genetic performance of what is sown.


With growing demands on the world’s food supply chain, it’s crucial to maximize agriculture resources in a sustainable manner. Currently, precision agriculture technologies are used by larger companies as it requires a robust IT infrastructure and resources to do the monitoring. (IBM Research, n.d.)
Given the limitations of satellites in delivering high-resolution images, the use of unmanned aerial vehicles (UAVs), better known as drones, may be a viable low-cost alternative for monitoring agricultural areas, since they are equipped with small multifunctional sensors, supplemented by autonomous navigation systems, and can facilitate frequent monitoring of crop parameters and variables of interest.
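As a concrete, deliberately simplified illustration of extracting a crop variable from UAV imagery, the Python/NumPy sketch below computes NDVI, a widely used vegetation index, from near-infrared and red reflectance bands; the sample arrays are made up for the example and are not real sensor data.

```python
import numpy as np


def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values near +1 indicate dense, healthy vegetation; values near 0 or
    below suggest bare soil, water, or stressed plants.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Suppress divide-by-zero warnings and return 0 where both bands are 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom == 0, 0.0, (nir - red) / denom)


# Hypothetical 2x2 patches of reflectance captured by a UAV multispectral camera.
nir_band = np.array([[0.60, 0.55], [0.20, 0.58]])
red_band = np.array([[0.10, 0.12], [0.18, 0.11]])

index = ndvi(nir_band, red_band)
print(index)                                     # per-pixel NDVI
print("stressed pixels:", int(np.sum(index < 0.3)))  # crude flag for follow-up scouting
```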

However, some barriers and forces are holding back the expansion and adoption of these technologies. In countries like the US, for example, there are limitations imposed by some agencies. Part of the problem is that when police forces or private or public companies are authorized to use drones, that authorization may conflict with established privacy laws.

In any case, I think the horizon for agricultural uses will become clearer, because in principle the operating environment of a drone over farmland is not as problematic where privacy is concerned.

Precision agriculture will bring many benefits to agricultural production and will be profitable for farmers who decide to acquire new computer skills to take advantage of what this technology offers. Another negative force, or disadvantage, is the high cost and training required, which will leave some producers out. In addition, the management of genetics in food will have to lean on the ethical and philosophical fields to study and prevent its abuse.

A technological innovation always faces resistance to change, particularly in sectors with less formal preparation, such as farming. Precision agriculture is at an even greater disadvantage in this respect because it did not emerge as a need of small farmers; rather, it was the transnational producers of agricultural machinery who were interested in pushing the new technology onto the market.

References

IBM Research. (n.d.). Precision agriculture: Using predictive weather analytics to feed future generations. Retrieved from http://www.research.ibm.com/articles/precision_agriculture.shtml

Kumar, V. (2015, April). The future of flying robots. [Video file]. Retrieved from http://www.ted.com/talks/vijay_kumar_the_future_of_flying_robots


Wearable computing

New user-interface proposals aim to find interfaces that adapt to the ways humans communicate, rather than shifting the learning effort onto the user. Along these lines, research efforts have been devoted to so-called wearable computing, which pursues the idea of user and machine acting as a single entity. The main argument against this scenario is that it does not yet allow intuitive and natural interaction: it is the person who must adapt to a form of communication that is still not comparable to spoken language or commonly used gestures (Starner, 2011).

One example is a headband that senses brain waves to detect a vehicle driver's tiredness. The technology monitors brain activity and warns drivers that they are becoming fatigued before it becomes dangerous, giving them time to do the one thing that helps: stop and take a break. Tired drivers pose a risk both to themselves and to those around them, and we have seen different types of technology that aim to keep groggy drivers alert and, in theory at least, let them get off the road before they become a safety hazard. Drowsy driving is the second leading cause of fatal accidents on US roads.
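As a rough sketch of how such a warning could work, and not the vendor's actual algorithm, the Python example below assumes a hypothetical normalized "alertness" score derived from the headband's readings and raises a warning when its moving average falls below a threshold.

```python
from collections import deque


class DrowsinessMonitor:
    """Toy moving-average monitor over a hypothetical normalized alertness score (0-1)."""

    def __init__(self, window: int = 30, threshold: float = 0.4):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def update(self, alertness: float) -> bool:
        """Add one sample; return True if the driver should be warned to take a break."""
        self.samples.append(alertness)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough data yet to judge
        average = sum(self.samples) / len(self.samples)
        return average < self.threshold


monitor = DrowsinessMonitor()
stream = [0.8] * 20 + [0.35] * 40  # simulated gradual fatigue
for i, score in enumerate(stream):
    if monitor.update(score):
        print(f"Warning at sample {i}: pull over and take a break")
        break
```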

Wearable devices have been getting a lot of attention lately from many vendors. Until recently, wearable computing devices garnered the most attention from the fitness and medical worlds, but markets are shifting as users start to adopt the new trend. Experts predict that if wearable technology is really going to take off with consumers, one of the most important factors developers will need to consider is aesthetics: these new gadgets will need either to blend inconspicuously with an outfit or to look good without drawing too much attention. In addition, security features for data transmission, storage, and processing need to be reinforced. The estimated time to massive adoption is 7 to 12 years.

One problem wearable technology faces is that, as the name implies, it must be wearable. This in turn means there is only so much space on the body for wearable devices, and even then, not all of that space can be used. Wearable tech as a whole is still in its earliest days, and pushing its critical elements further is only to be expected. To successfully pull off this new technology, computer engineers, developers, and programmers will need to expand their knowledge into specialties outside IT (human anatomy, psychology, and sociology).

According to Statista.com, by 2018 the market for wearable technology is estimated to be worth some 12.6 billion U.S. dollars.

References

Statista (2013). Facts and statistics on Wearable Technology. Retrieved from http://www.statista.com/topics/1556/wearable-technology/


Starner, T. (2011). The challenges of wearable computing. Retrieved from http://www.cc.gatech.edu/~thad/p/magazine/published-part1.pdf


Sunday, February 7, 2016

Predictions of the past on today’s media

Times have changed. The digital revolution has altered many of the great paradigms of communication. The internet brought new media and a new way to access information, and although we can debate the different business models needed to survive in this new scenario, once again the problem stems from new trends and habits. The following predictions from the past shaped what is now known as the digital press.

Kids watch the comics section of a faxpaper being printed on a home receiver.

RCA radio facsimile newspapers, 1930s. In the early thirties, radio was the worst enemy of newspapers, which sought to prevent broadcasters from transmitting news. Some radio stations, particularly WOR in New York, put into operation a news service called Transradio News, prompting newspaper owners to build synergies with radio and experiment with a new technology called radio facsimile to deliver newspapers over radio waves. The idea was that unused frequencies of the radio spectrum could be licensed to deliver a newspaper in the evenings via "facsimile radio." These "radio-fax newspapers" could be printed at home while everyone slept and be ready to read upon waking, without getting one's hands dirty with ink (Novak, 2015).

Philco-Ford printer newspapers, 1967. An episode of "The 21st Century," hosted by Walter Cronkite and entitled "At Home in 2001," originally broadcast on March 12, 1967, was recently rebroadcast on the Discovery Channel. The program took viewers of the late sixties into the futuristic world of 2001, in which the news would be sent via satellite and could be printed at home at the touch of a button. A console would offer a summary of news from around the world broadcast by satellite (on April 6, 1965, "Early Bird," the first commercial communications satellite, had been put into orbit), and pressing a button was enough to get a hard copy of the newspaper (Schneider, n.d.).


Los Angeles Times Magazine printed on a laser printer, 1988. The April 3, 1988 issue of the Los Angeles Times Magazine was devoted to what the city of Los Angeles might look like in 2013. The prediction included much of what could be expected of eighties futurism: fingerprint verification at ATMs and for access to public buildings, computers in classrooms, "smart" phones, and home robots. The predictions about the future of newspapers included electronic delivery of copies of the newspaper to a personal computer. The idea was that a family would receive the newspaper on its computer with only the information of particular interest and news about its neighborhood; the computer would be programmed to print it on a laser printer while the family slept. (Laser printers were invented in 1969.) (LA Times, 1988).

Newspapers on a tablet, 1994. The internet was still accessed with external modems requiring a dedicated telephone line when, in 1994, Knight Ridder, which until 2006 was a recognized media corporation specializing in newspapers and the internet, produced a video showing what a daily newspaper on a tablet would look like. The video can be watched online at the link in the references (Information Design Lab, 1995).

Traditional mass print media used to offer companies the ability to reach large audiences, but those audiences, now increasingly fragmented and shrinking, along with ever more competitive metrics, are pushing companies to focus their efforts on digital media. Print media have lost ground, and no trend seems likely to change that.

References


Information Design Lab. (1995). The Tablet newspaper: a vision of the future. [Video File]. Retrieved from https://www.youtube.com/watch?v=7_QyktOw0JM

LA Times. (1988). Techno comforts and urban stress in 2013. Retrieved from http://documents.latimes.com/la-2013/

Novak, M. (2015). Fax papers: A Lost 1930s Technology That Delivered Newspapers via Radio. Retrieved from http://gizmodo.com/faxpapers-the-lost-dream-of-delivering-newspapers-thro-1682383694

Schneider, J. (n.d.). The newspaper of the air: Early experiments with radio facsimile. Retrieved from http://www.theradiohistorian.org/Radiofax/newspaper_of_the_air1.htm

Sunday, January 24, 2016

Think Tank Methods for Group Decision Making

Group decision making is probably preferable to individual decision making in many cases. Three techniques, when used properly (brainstorming, the Delphi process, and the nominal group technique), have been extremely useful for increasing a group's creative capacity to generate ideas, understand problems, and make better decisions. Practical considerations, of course, often influence which technique is used; for example, factors such as the number of available work hours, costs, and the physical proximity of participants influence the choice of one technique over another.

Delphi Process


This technique consists of requesting and comparing anonymous judgments on the subject of interest through a series of sequential questionnaires applied to experts in the field. The Delphi process preserves the advantage of obtaining a diversity of judgments, opinions, and approaches while suppressing the biases that can occur in face-to-face interaction. The predictive power of Delphi is based on the systematic use of the intuitive judgment of a panel of experts. It requires a large number of participants for the results to be statistically significant, and the panel must have a high degree of correspondence with the topics covered in the exercise.

Brainstorming


Brainstorming is widely used in the advertising sector, where it is apparently quite effective. In other fields it has been less successful. The basic rules are:

  1. No idea is too ridiculous. Group members are encouraged to express extreme or crazy ideas.
  2. Each idea presented belongs to the group, not to the person who voiced it. In this way, the group can use and build on the ideas of others.
  3. No idea can be criticized. The purpose of the meeting is to generate ideas, not to evaluate them.


Nominal Group


According to Tammela (2013), the nominal group technique is basically a structured group meeting in which seven to ten people sit around a table without speaking to each other. Each person writes down ideas on a notepad. After five minutes, a structured exchange of ideas takes place: each person presents one idea, and a person appointed as secretary writes the ideas on a flipchart or whiteboard in view of the whole group. This continues until all participants indicate that they have no more ideas to share. The next stage is a formal debate in which each idea receives attention and discussion before being voted on; this is accomplished by asking for clarification or indicating the degree of support for each idea on the flipchart. The last step is an independent vote in which each participant privately ranks or votes on the priorities. The group's decision is the mathematically aggregated result of the individual votes, as sketched below.
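As an illustration of that final aggregation step, the Python sketch below combines each participant's private ranking using a Borda count, one common way of turning individual rankings into a mathematically ordered group decision; the ballots shown are hypothetical.

```python
from collections import defaultdict


def borda_count(rankings: list[list[str]]) -> list[tuple[str, int]]:
    """Aggregate individual rankings into a group order using a Borda count.

    Each participant lists ideas from most to least preferred; among n ideas,
    a first-place idea earns n-1 points, second place earns n-2, and so on.
    """
    scores: dict[str, int] = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, idea in enumerate(ranking):
            scores[idea] += n - 1 - position
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)


# Hypothetical private rankings from four participants.
ballots = [
    ["idea A", "idea C", "idea B"],
    ["idea C", "idea A", "idea B"],
    ["idea A", "idea B", "idea C"],
    ["idea B", "idea A", "idea C"],
]
print(borda_count(ballots))  # e.g. [('idea A', 6), ('idea C', 3), ('idea B', 3)]
```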

Van de Ven (1974) states that both the Delphi method and the nominal group technique have proven more efficient than brainstorming. He also points out that the basic differences between the Delphi method and the nominal group technique are:


  1. Participants in the Delphi process are usually anonymous to each other, while participants in a nominal group know one another.
  2. Participants in a nominal group meet face to face around a table, while participants in a Delphi process are physically distant and never meet.
  3. In the Delphi process, all communication between participants takes place through written questionnaires and comments relayed by the supervising staff. In the nominal group, participants communicate directly.

References


Tammela, O. (2013), Applications of consensus methods in the improvement of care of paediatric patients: a step forward from a ‘good guess’. Acta Paediatrica, 102: 111–115. doi: 10.1111/apa.12120

Van de Ven, A. (1974). The effectiveness of nominal, Delphi, and interacting group decision making processes. Academy of Management Journal. doi: 10.2307/255641

Learning Analytics

According to Johnson, Adams Becker, Cummins, and Estrada (2014), learning analytics is the interpretation of a wide range of data produced by and gathered about students in order to guide their academic progress, predict future actions, and identify problematic elements. The aim of collecting, recording, analyzing, and presenting these data is to enable teachers to adapt educational strategies quickly and effectively to the level and ability demanded by each student. Even in its early stages of development, learning analytics often responds to the need to monitor and control activity on the virtual campus for strategic decision making. It also seeks to exploit the vast amount of data produced by students in academic activities.
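A minimal sketch of the predictive side of learning analytics, assuming a hypothetical table of per-student activity counts and historical pass/fail labels, and using scikit-learn's logistic regression to flag students at risk:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-student features gathered from a virtual campus:
# [logins per week, forum posts, assignments submitted on time]
X = np.array([
    [12, 8, 5],
    [3, 0, 1],
    [9, 4, 4],
    [1, 1, 0],
    [15, 10, 5],
    [2, 0, 2],
])
# Historical outcomes: 1 = passed the course, 0 = failed.
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# New students' activity so far; a high estimated risk of failing flags them for tutoring.
new_students = np.array([[4, 1, 1], [11, 6, 4]])
risk = 1 - model.predict_proba(new_students)[:, 1]
for student, r in zip(["student_17", "student_42"], risk):
    print(f"{student}: estimated risk of failing = {r:.2f}")
```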

Overall, the information gathered is used to customize training and to design learning environments according to the needs, interests, and ways of interacting of teachers and students. The statistical record of student and teacher activity also identifies the hot spots of a teaching-learning process.

Even though experts predict that this technology should be widely adopted within 3 to 5 years, the potential of learning analytics faces obstacles. The privacy of student data is an important issue that has received attention recently, but there are others. One of these challenges is to broaden educators' perspective on the possibilities of personalized learning guided by analytics, which differs significantly from instructional strategies guided by traditional data, because there are far more data available from which to extract information, meaning, and usefulness.

Among the forces shaping this technological trend are the increasingly blurred boundaries between formal and informal learning: the same person may be taking a course on a virtual campus, following a series of Twitter feeds and blogs, communicating in forums with fellow students, and chatting synchronously with friends and colleagues. Buckingham and Ferguson (2011) point out that digital footprints can be captured in a wide variety of contexts and allow behavior to be analyzed in a wide variety of situations.

Computer-assisted instruction is not new, but the proliferation of technology and of sophisticated methods for data analysis is. The quality and quantity of available data open new opportunities to provide effective personalized learning experiences, though certainly with some challenges. However, we have seen firsthand the benefits these technologies can bring to millions of students, so we think it is a journey worth taking.

References

Ballard, C. (2012). Learning Analytics - Improving Student Retention. Retrieved from http://www.slideshare.net/ChrisBallard/learning-analytics-improving-student-retention

Buckingham, S. & Ferguson, R. (2011). Social Learning Analytics. Technical Report KMI-11-01, Knowledge Media Institute, The Open University, UK. Retrieved from http://kmi.open.ac.uk/publications/pdf/kmi-11-01.pdf

Johnson, L., Adams Becker, S., Cummins, M., and Estrada, V. (2014). 2014 NMC Technology Outlook for International Schools in Asia: A Horizon Project Regional Report. Austin, Texas: The New Media Consortium.

Monday, January 18, 2016

About Emerging Technologies and ET ROOM

ABOUT THIS BLOG


Innovation and emerging technologies are the only way to ensure that companies remain competitive. But this is not easy, since there is a big difference between talking about innovation and putting it into practice. Innovation is a bet on the future. I look forward to discussing the various ways and areas in which innovation can contribute to our organizations.
Emerging technologies are defined as "scientific innovations" that can create a new industry or transform an existing one. They include discontinuous technologies derived from radical innovations, as well as more advanced technologies formed as a result of the convergence of previously separate branches of research.

Each of these technologies offers a rich range of market opportunities that provide an incentive for risk investment. The problem these new technologies pose, both for managers of mature firms and for those of start-ups, is that traditional management tools are unable to deal successfully with the new challenges they generate.

These terms are used interchangeably to indicate the emergence and convergence of new technologies with demonstrated potential as disruptive technologies. Among them are nanotechnology, biotechnology, information and communications technology, cognitive science, robotics, and artificial intelligence.

Precisely what “the future” consists of is unknown, or at best uncertain. Certain emerging technologies will enable the development of other new technologies. In this sense, some technologies are enabling technologies that allow other technologies to be more fully exploited. (Koty, 2006).

ABOUT THE AUTHOR


Pedro Taveras received a bachelor's degree in systems and computing engineering (cum laude) from the Pontificia Universidad Católica Madre y Maestra, Dominican Republic. He successfully completed the coursework for the Master of Science in Information and Telecommunications Technologies at the Rochester Institute of Technology in May 2001 and holds a master's degree in Information Systems from Stevens Institute of Technology with a concentration in technology management. He worked as a software engineer for the PCA Group, a technology-leading firm with markets in Buffalo, NY, Rochester, NY, and Kansas City, KS. He lectures on information and computer science subjects at the Pontificia Universidad Católica Madre y Maestra. He is CEO, founder, and software architect of Xtudia, LLC, a software development firm that produces solutions for industrial automation, business, and education. His research interests include software engineering, information processing, knowledge management systems, 2D/3D programming, human-computer interaction, human aspects of computing, natural user interfaces, and pervasive (ubiquitous) computing. He is currently enrolled in the second year of the Doctoral Program in Computer Science with a concentration in Information Assurance at Colorado Technical University.

Interests: information security, software architecture, software engineering, information processing, knowledge management systems, 2D/3D programming, human-computer interaction, human aspects of computing, natural user interfaces, and pervasive (ubiquitous) computing.

References


Koty, W. (2006). Ahead of the future: Report on emerging technology research. Retrieved from http://www.gov.bc.ca/premier/attachments/Emerging_Tech_Research.pdf