This is a cross-post of my article on Singularity Weblog.
Software is eating up education. The ubiquity of connected devices, school budget pressures and dependency on the publishing industry make education a ripe target for software-based disruption. As a result, an increasing number of software companies have been founded in recent years around the idea of making digital education smarter, cheaper and more accessible. The educational market has reacted positively to these new services: enrollment in online courses grew 17%, compared to only 1.5% growth in overall higher education; digital textbooks are expected to account for 35% of the textbook market by 2016; and blended learning is projected to reach 98% penetration by 2020. As expected, such growth has created interest in the capital market: the edtech space has seen several large transactions over the last three years and the birth of about half a dozen startup incubators.
Many of these companies pursue the holy grail of an autonomous instructional system. The premise is that pattern-analysis algorithms running on large amounts of student performance data will create a new system of learning centered on a narrow artificial intelligence. Such a system would independently guide students, coming up with personalized recipes for mastering cognitive skills. In concept, this is no different from how mapping software helps you find the quickest route to the gas station. In practice, the scope and complexity required to achieve human-like instruction are closer to a self-driving car than to a GPS navigation system.
In recent years, narrow artificial intelligence has been gaining ground at an exponential pace and is soon to become mainstream. Our favorite search engines, ecommerce sites and social networks are all based on massive adaptive algorithms. The aforementioned robocars are already driving themselves through the streets of Nevada and California, while the military has been using autonomous aircraft (drones) for over two decades. Personal helpers such as Apple's Siri are on every smartphone, and consumer-friendly robots are the next big thing. In 2011, IBM's cognitive system known as Watson defeated two human champions in the game of Jeopardy!, and IBM has since announced it intends to make a smartphone version of this powerful intelligence. Ray Kurzweil, a world-renowned inventor and artificial intelligence pioneer, has recently joined Google, a move that hints that exciting artificial intelligence applications will soon be introduced by the search engine giant.
In edtech, companies like Knewton, Grockit, Dreambox Learning, and Carnegie Learning are years into building adaptive software. In January 2012, the Hewlett Foundation conducted a competition for automated essay scoring which yielded human-equivalent results, a long-sought-after feat. 2012 is also known as the year of the MOOC, with Stanford (Udacity, Coursera), MIT and Harvard (edX) offering massive open online courses that rely heavily on automation and machine learning.
While the current crop of adaptive educational software can be seen as primitive, the likely manifestation of artificial intelligence in the field of education is an autonomous instructional system. There is, however, a significant prerequisite: such a system requires large quantities of data from which patterns can be extracted and on which algorithms can be trained. While in many fields of science data can be easily collected, the education industry has not historically collected significant data, in either quality or quantity. Further, it is still very much an open question what data needs to be collected.
Most educational software collects student results that are then correlated to learning context, learning style and resources. Such correlations are effective in yielding shallow personalization such as content recommendations and basic alerts. In essence, this is no different from product recommendations in ecommerce systems: useful, but by no means intelligent. To build a narrow artificial intelligence, we need to collect not only information about the learning results but also about the learning process. We need to monitor students continuously instead of sparsely. It is likely that we will need to borrow techniques from outside the field of education: monitoring human-machine interaction and A/B testing (advertising, gaming); physiological sensors and brainwave scanners (quantified self, e-health); augmented and virtual reality (gaming); and social graphs, to name a few.
And so, the race is on. Many edtech startups attempt to build the platform of education, one that could collect data behind the scenes the same way Google, Amazon and Facebook collect data about their users. Others offer standardized cloud-based educational data stores and application programming interfaces that can be used by educational app developers. Either way, the serious players are all trying to reach the critical mass of data required for the breakthrough. With heavy hitters like Sebastian Thrun, Anant Agarwal and Bill Gates joining the race, it is likely a winner will emerge over the next few years.
Privacy is always a major concern when it comes to data-centric technology, and the data collected by such systems is very sensitive. Assuming we start collecting data in early childhood, we will end up with more than a decade of personal research backed by fine-grained data. It will provide insight into students' personalities, intelligence, strengths and weaknesses, and could be used by commercial and government bodies alike to manipulate them to their needs. Can we stop this data from reaching potential employers, government agencies and other curious parties? Some suggest the rules of supply and demand will force future students to share their data as a means of getting admitted to their school of choice or landing their dream job, much in the way students share their GPA today.
Traditional educational software is built around content and assumes humans will perform the instruction. In most cases, it is simple for schools to judge the content's quality and how well it matches their requirements. Autonomous educational software is built around the instructional component and aims to replace the human with a machine. This raises the question: can we trust commercial companies to define these algorithms for us, or should educational software be regulated by the government? Traditionally, governments define the curriculum and train and monitor teachers. Performance data is kept within a government-controlled ecosystem. Is society ready to let go of these principles and trust the machines and their builders? Many believe our government and existing education administration will never allow such a revolution to take place. Others say it is deep in our human nature to adopt tools that give us an advantage, and this technology will be no exception.
Generally, teachers perform three functions: knowledge transfer and skill development; imparting community values; and social and behavioral training. In a world with an autonomous instructional system, teachers will give up the function of knowledge transfer and skill development to machines. Those educators who excel at this function will work for technology companies instead of schools. Their job will be to develop new teaching methodologies, which they will get to test and implement at a much larger scale than they do today. Examples of how this could look are the work of Eric Mazur of Harvard University on peer instruction, and the aforementioned MOOCs.
Other teachers will focus on the remaining functions, which are unlikely to be replaced by narrow artificial intelligence due to their social nature. They will also continue to perform a supporting role in knowledge transfer and skill development. An intangible is that a good teacher not only has the knowledge and skills to help a student succeed, but also cares. This sense of caring may be difficult for a machine to produce. That said, these social functions will also come to depend more on technology and data. One example is insights derived from the social graph, particularly the communication patterns between its members. Another is video analysis of social situations.
In both cases, we will see more specialization. The role of educators will shift towards a greater separation between theorists and practitioners: the former driving methodologies, the latter applying them in classrooms. Great educators will focus on the science of teaching; they will become uber-educators and their methodologies will have global impact. Practitioners will focus on the art of teaching, supporting students and guiding their emotional and social growth.
Many point out that our education system is rooted in the industrial era and is no longer able to perform its social role. Sir Ken Robinson explains it best. I believe the learning revolution is near. With access to a diverse world of knowledge, intelligent machines performing personalized instruction, and humans focused on noncognitive skills, the future of education looks bright. That said, as a society we need to be prepared for such change and put checks and balances in place so that our core values, traditions and social nature are not lost in the process.
Tomer Doron's Technology Blog
Wednesday, April 3, 2013
Monday, December 19, 2011
google style gauges using d3.js
I've been playing lately with SVG visualization using the excellent d3.js library. I originally chose d3.js because I needed to create highly customized visualizations, and it is a great tool for lower-level visualization work.
Some of the visualizations included gauges. In many of my products I use Google's gauges, but in this case I needed to support offline mode, and Google's charts require an internet connection. So I decided to go ahead and rebuild them using d3.js.
Below are links to the code and an example:
http://bl.ocks.org/1499279
https://gist.github.com/1499279
note: the transition effect of the "pointer" still needs love; feel free to fork or otherwise contribute.
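The heart of any such gauge is mapping a value in its [min, max] range to a pointer rotation angle. A minimal sketch of that mapping (the function name and the -135°..+135° sweep here are my own illustrative choices, not necessarily what the linked code uses):

```javascript
// Map a gauge value to a pointer angle in degrees.
// Assumes the gauge sweeps from -135 degrees (min) to +135 degrees (max),
// a 270-degree arc, and clamps out-of-range values.
function pointerAngle(value, min, max) {
  const clamped = Math.min(max, Math.max(min, value));
  const fraction = (clamped - min) / (max - min);
  return -135 + fraction * 270;
}

// In d3, the SVG pointer group is then rotated with a transform, e.g.:
// pointer.attr("transform", "rotate(" + pointerAngle(v, 0, 100) + ")");
console.log(pointerAngle(0, 0, 100));   // -135
console.log(pointerAngle(50, 0, 100));  // 0
console.log(pointerAngle(100, 0, 100)); // 135
```

Animating the pointer is then just a d3 transition that interpolates this angle between the old and new values.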
Tags:
d3.js,
gauge,
google,
javascript,
visualization
Thursday, March 3, 2011
a less simple safe-html sanitizer
Safe HTML has been promoted by Google and others as a solution for XSS, specifically when dealing with user-generated content. Unfortunately, GWT provides a rather naive implementation of an HTML sanitizer named SimpleHtmlSanitizer, which I found too simple for even simple use cases. Relying on the GWT framework and modeled after SimpleHtmlSanitizer, here is what I came up with: https://gist.github.com/1499453
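The gist itself is Java/GWT, but the core idea behind this family of sanitizers is language-agnostic: escape everything, then re-allow only a whitelist of simple, attribute-free tags. A rough sketch of that approach in JavaScript (the function name and tag handling are illustrative, not the gist's actual code):

```javascript
// Whitelist-based sanitizer sketch: HTML-escape the whole input, then
// re-allow a configurable set of bare open/close tags (no attributes).
// Anything outside the whitelist, including <script>, stays escaped.
function sanitize(html, allowedTags) {
  let escaped = html
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
  for (const tag of allowedTags) {
    escaped = escaped
      .replace(new RegExp("&lt;" + tag + "&gt;", "gi"), "<" + tag + ">")
      .replace(new RegExp("&lt;/" + tag + "&gt;", "gi"), "</" + tag + ">");
  }
  return escaped;
}

console.log(sanitize('<b>hi</b><script>alert(1)</script>', ["b", "i"]));
// <b>hi</b>&lt;script&gt;alert(1)&lt;/script&gt;
```

Because tags with attributes never match the whitelist patterns, payloads like `<b onclick=...>` remain escaped as well, which is the property that makes the escape-then-allow approach safer than trying to strip dangerous markup.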
Saturday, February 5, 2011
fun with PPC
I recently received an old Apple G5. Even after a decade, its dual 64-bit PPC CPUs and 8GB of RAM make it quite a capable machine. So when I sent my primary box to the lab, I decided to spend a day setting it up as a development workstation.
With Apple not supporting the PPC processor line since Snow Leopard, I decided to go with a Linux setup. I tested a few alternatives (Fedora 12, Yellow Dog, Ubuntu, and Debian Lenny and Squeeze), eventually choosing Squeeze as it offered good hardware support and a more minimalist UI. Too bad CrunchBang does not support PPC; it is by far my favorite desktop distribution. Overall, installation of Squeeze was a breeze; more info can be found here.
My primary programming languages these days are Scala for the back end and GWT for the front end, so the next step was getting them to work.
Scala
The latest Scala package in the Debian repositories is 2.7.7, so you will need to download the latest from Scala's site and configure it appropriately. So far so good, but a simple test yields bad news: Scala was super slow. After further digging I learned that the root cause was the JVM: Squeeze comes pre-packaged with OpenJDK, which is extremely slow (to the point of unusable) on PPC as it runs in interpreted mode. Luckily, there is a simple solution: installing the IBM JDK found here. Make sure to download the 32-bit version if you are using a G5, and that the libstdc++5 and libgtk1.2 packages are installed.
GWT
To develop for GWT you need Eclipse with the GWT plugin installed. This part is easy; just make sure you download the 32-bit version of Eclipse if you are using a G5. The troubles begin when you are finally ready to debug: GWT debugging depends on a browser plugin that, unfortunately, is not supported on PPC Linux. Luckily, the GWT code is open, so by jumping through a few hoops you can make it work using the following steps:
1. make sure xulrunner and xulrunner-devel packages are installed
2. download GWT source:
$ svn checkout http://google-web-toolkit.googlecode.com/svn/trunk/ trunk
$ svn checkout http://google-web-toolkit.googlecode.com/svn/plugin-sdks/ plugin-sdks
3. copy the plugin SDKs from the x86 version as a PPC version:
$ cp -R plugin-sdks/gecko-sdks/gecko-1.9.1/Linux_x86-gcc3 plugin-sdks/gecko-sdks/gecko-1.9.1/Linux_ppc-gcc3
$ cp -R plugin-sdks/gecko-sdks/gecko-1.9.1/Linux_x86_64-gcc3 plugin-sdks/gecko-sdks/gecko-1.9.1/Linux_ppc64-gcc3
Note that gecko-1.9.1 maps to Firefox 3.5, so if you are attempting to compile for a different version you will need to change the path accordingly. Read the makefile for the complete version-number mapping.
4. Replace all the files in Linux_ppc-gcc3/lib, Linux_ppc-gcc3/bin, Linux_ppc64-gcc3/lib and Linux_ppc64-gcc3/bin with the "real" ones from your system; they are all found either in /usr/lib/xulrunner-devel-1.9.1/sdk/lib or in /usr/lib.
This assumes you are running xulrunner version 1.9.1; otherwise, your path may differ.
5. prepare to compile:
$ cd trunk/plugins/xpcom
$ export BROWSER=ff35
$ export DEFAULT_FIREFOX_LIBS=/usr/lib/xulrunner-devel-1.9.1/sdk/lib/
Again, this assumes you are running xulrunner version 1.9.1 and are trying to compile for Firefox 3.5; you will need to change these if your system is running different versions.
6. edit the install-template.rdf file, adding an entry for Linux PPC after the other platform entries:
<em:targetPlatform>Linux_ppc-gcc3</em:targetPlatform>
7. finally, compile GWT from source
$ make clean
$ make
This will create a Firefox plugin named "gwt-dev-plugin.xpi" in the prebuilt directory; install it using Firefox.
Saturday, November 6, 2010
integrating elasticsearch and lift
I recently came across elasticsearch and was waiting for a good opportunity to try it out. The opportunity came in the shape of a new project I am working on, which requires a Lucene-grade search engine and is in Lift/Scala, which lends itself well since I could use the Java elasticsearch client API.
Elasticsearch is quite new, and I could not find real documentation on how to integrate it. However, going over the Java source code and the high-level documentation on the web site, I was able to come up with the following simple integration:
First define a search engine abstraction:
object SearchEngine extends Logger {

  private var _node:Node = null
  private var _client:Client = null

  def startup() {
    if (!enabled) {
      warn("search engine is disabled. if this is not intentional, please change the configuration file accordingly")
      return
    }
    _node = nodeBuilder().client(true).node
    if (null == _node) return
    _client = _node.client
  }

  def shutdown() {
    if (null != _node) _node.close()
    _client = null
  }

  def enabled:Boolean = "true" == (Props.get("search.enabled") openOr "false")

  def connected:Boolean = null != _client

  def index(indexName:String, typeName:String, identifier:String, json:JValue) {
    if (null == indexName) throw new Exception("invalid (null) index")
    if (null == typeName) throw new Exception("invalid (null) type")
    if (null == identifier) throw new Exception("invalid (null) identifier")
    if (null == json) throw new Exception("invalid (null) json")
    if (!enabled) return
    confirmConnection
    val request:IndexRequestBuilder = _client.prepareIndex(indexName, typeName, identifier)
    request.setSource(pretty(render(json)))
    val response:IndexResponse = request.execute().actionGet()
  }

  def delete(indexName:String, typeName:String, identifier:String) {
    if (null == indexName) throw new Exception("invalid (null) index")
    if (null == typeName) throw new Exception("invalid (null) type")
    if (null == identifier) throw new Exception("invalid (null) identifier")
    if (!enabled) return
    confirmConnection
    val request:DeleteRequestBuilder = _client.prepareDelete(indexName, typeName, identifier)
    val response:DeleteResponse = request.execute().actionGet()
  }

  def search(indexName:String, query:XContentQueryBuilder, from:Integer, size:Integer, explain:Boolean=false):SearchHits = {
    if (null == indexName) throw new Exception("invalid (null) index")
    if (null == query) throw new Exception("invalid (null) query")
    if (!enabled) return null
    confirmConnection
    val request:SearchRequestBuilder = _client.prepareSearch(indexName)
    request.setSearchType(SearchType.QUERY_THEN_FETCH)
    request.setQuery(query)
    request.setFrom(from.intValue)
    request.setSize(size.intValue)
    request.setExplain(explain)
    val response:SearchResponse = request.execute().actionGet()
    return response.hits
  }

  private def confirmConnection {
    if (!connected) startup
    if (!connected) throw new Exception("cannot connect to search engine, perhaps it needs to be disabled?")
  }
}
Naturally, the next step is to add a call to SearchEngine.startup from your Boot.scala so the app connects to the search server on startup.
Next define a pair of model & companion traits to hide the integration details from the domain objects:
trait SearchableModelMeta[T <: SearchableModel[T]] extends BaseModelMeta[T] {
  self: T with SearchableModelMeta[T] with BaseModelMeta[T] =>

  private def searchIndexName:String = this.pluralXmlName
  private def searchTypeName:String = this.xmlName

  override def beforeSave = updateSearchIndexDate _ :: super.beforeSave
  override def afterSave = storeInSearchIndex _ :: super.afterSave
  override def afterDelete = deleteFromSearchIndex _ :: super.afterDelete

  def reindexAll() {
    findAll.foreach((instance:T) => { storeInSearchIndex(instance) })
  }

  def search(query:XContentQueryBuilder, from:Integer=0, size:Integer=100):SearchHits = {
    SearchEngine.search(this.searchIndexName, query, from, size)
  }

  private def updateSearchIndexDate(instance:T) {
    instance.indexedAt(Helpers.now)
  }

  private def storeInSearchIndex(instance:T) {
    SearchEngine.index(this.searchIndexName, this.searchTypeName, instance.id.toString, instance.toJson("search"))
  }

  private def deleteFromSearchIndex(instance:T) {
    SearchEngine.delete(this.searchIndexName, this.searchTypeName, instance.id.toString)
  }
}

trait SearchableModel[T <: BaseModel[T]] extends BaseModel[T] {
  self: T =>

  /* *** columns */

  object indexedAt extends MappedDateTime(this.asInstanceOf[T]) {
    override def dbColumnName = "indexed_at"
  }
}
Note this example uses a custom base model class, which is not super important to what I am trying to show here (it facilitates dynamic JSON assembly and naming conventions, but those are pretty straightforward). However, it is important to note that the base model mixes in LongKeyedMapper and IdPK. Headers for the class and companion are below.
trait BaseModelMeta[T <: BaseModel[T]] extends LongKeyedMetaMapper[T]

trait BaseModel[T <: LongKeyedMapper[T]] extends LongKeyedMapper[T] with IdPK
Last, mix in the SearchableModel traits with the domain model so you can do things like:
Employee.search(...)
This of course is a very basic example which uses only a handful of the features elasticsearch and Lucene offer; however, it should give anyone trying to integrate elasticsearch with Lift a good head start.
Tags:
elasticsearch,
lift,
liftweb,
lucene,
scala,
search engine
Thursday, November 4, 2010
ruby on rails style routes with lift
In my latest project I am using Lift/Scala for RESTful services. Having spent the last couple of years using Ruby on Rails for such services, the first thing I looked for was a convenient way to define my generic RESTful routes. Here is what I came up with:
object Routes extends RestHelper with Logger {

  // response builder
  implicit def cvt:JxCvtPF[ResponseItem] = {
    case (XmlSelect, response, request) => response.toXml
    case (JsonSelect, response, request) => response.toJson
  }

  private val PREFIX:String = "api"
  private var _services:mutable.ListMap[String, BaseService] = null

  def registerService(service:BaseService, alias:String=null) {
    if (null == _services) _services = new mutable.ListMap[String, BaseService]()
    val key:String = if (null != alias) alias else service.getClass.getName.split("\\.").toList.last.replace("Service$", "")
    _services += key -> service
  }

  /* *** standard restful routes */

  serveJx {
    // GET /api/service/index.{xml|json}
    case Get(PREFIX :: StringValue(service) :: "index" :: Nil, _) => invokeApi(service, "index")
  }

  serveJx {
    // GET /api/service/1.{xml|json}
    case Get(PREFIX :: StringValue(service) :: LongValue(id) :: Nil, _) => invokeApi(service, "get", id)
  }

  serveJx {
    // GET /api/service/api.{xml|json}
    case Get(PREFIX :: StringValue(service) :: StringValue(api) :: Nil, _) => invokeApi(service, api)
  }

  serveJx {
    // GET /api/service/api/1.{xml|json}
    case Get(PREFIX :: StringValue(service) :: StringValue(api) :: LongValue(id) :: Nil, _) => invokeApi(service, api, id)
  }

  serveJx {
    // POST /api/service.{xml|json}
    case Post(PREFIX :: StringValue(service) :: Nil, _) => invokeApi(service, "create")
  }

  serveJx {
    // POST /api/service/api.{xml|json}
    case Post(PREFIX :: StringValue(service) :: StringValue(api) :: Nil, _) => invokeApi(service, api)
  }

  serveJx {
    // POST /api/service/api/1.{xml|json}
    case Post(PREFIX :: StringValue(service) :: StringValue(api) :: LongValue(id) :: Nil, _) => invokeApi(service, api, id)
  }

  serveJx {
    // PUT /api/service/1.{xml|json}
    case Put(PREFIX :: StringValue(service) :: LongValue(id) :: Nil, _) => invokeApi(service, "update", id)
  }

  serveJx {
    // DELETE /api/service/1.{xml|json}
    case Delete(PREFIX :: StringValue(service) :: LongValue(id) :: Nil, _) => invokeApi(service, "delete", id)
  }

  private def invokeApi(service:String, api:String, id:Long=0):Box[ResponseItem] = {
    val serviceName:String = StringHelpers.camelify(service)
    val methodName:String = StringHelpers.camelifyMethod(api)
    info("processing api " + service + ":" + api + " as " + serviceName + "[Service]:" + methodName)
    if (!_services.contains(serviceName)) return Full(Failed("unknown service " + service))
    _services(serviceName) match {
      case serviceHandler:BaseService => {
        serviceHandler.getClass.getMethods.foreach((method:Method) => {
          if (methodName == method.getName) {
            val result:Object = if (0 == id) method.invoke(serviceHandler) else method.invoke(serviceHandler, id.asInstanceOf[AnyRef])
            result match {
              case response:ResponseItem => return Full(response)
              case _ => return Empty
            }
          }
        })
        return Full(Failed("unknown API " + service + ":" + api))
      }
      case _ => Full(Failed("invalid API configuration, service is not a BaseService"))
    }
  }
}
Once the routes handler (above) is defined, you need to register the service implementation(s) with the routes handler and register the routes handler in Lift's dispatch table. This is done in Lift's boot sequence, typically defined in Boot.scala. For example:
Routes.registerService(SessionService)
Routes.registerService(UserService)
. . .
LiftRules.dispatch.append(Routes)
Naturally, you can extend this REST handler to handle custom RESTful routes using the serveJx or serve directives; this is explained in more detail here.
Please note that a bug in Scala prevents bundling all those serveJx case statements together, hence the DRYlessness.
Tuesday, December 8, 2009
comparing ruby 1.9 performance on ec2
Benchmarked using Ruby's benchmarking suite. All machines are brand-new Fedora 8 basic instances, not running any additional services. All tests were done on ruby 1.9.1p376 (2009-12-07 revision 26041) [i686-linux], compiled from source with the --enable-shared flag. Benchmarks were performed twice, on 2 separate instances.
The results speak for themselves; don't forget to compare EC2 pricing ;)
All times are in seconds (lower is better); each instance type shows two runs and their average.
test | small 32 bit: run 1 | run 2 | avg | medium 32 bit: run 1 | run 2 | avg | large 64 bit: run 1 | run 2 | avg |
app_answer | 0.251 | 0.263 | 0.257 | 0.113 | 0.118 | 0.1155 | 0.087 | 0.111 | 0.099 | |
app_erb | 2.926 | 2.911 | 2.9185 | 1.2 | 1.24 | 1.22 | 1.178 | 1.184 | 1.181 | |
app_factorial | 1.425 | 1.435 | 1.43 | 0.573 | 0.574 | 0.5735 | 0.445 | 0.459 | 0.452 | |
Benchmark | Run 1 | Run 2 | Mean | Run 1 | Run 2 | Mean | Run 1 | Run 2 | Mean
--- | --- | --- | --- | --- | --- | --- | --- | --- | ---
app_fib | 2.935 | 3.071 | 3.003 | 1.248 | 1.291 | 1.2695 | 1.137 | 1.158 | 1.1475
app_mandelbrot | 1.343 | 1.348 | 1.3455 | 0.566 | 0.569 | 0.5675 | 0.545 | 0.561 | 0.553
app_pentomino | 99.62 | 99.227 | 99.4235 | 41.587 | 41.832 | 41.7095 | 38.587 | 38.976 | 38.7815
app_raise | 3.155 | 3.087 | 3.121 | 1.309 | 1.326 | 1.3175 | 1.488 | 1.521 | 1.5045
app_strconcat | 2.714 | 2.754 | 2.734 | 1.173 | 1.189 | 1.181 | 1.001 | 1.07 | 1.0355
app_tak | 4.161 | 4.174 | 4.1675 | 1.743 | 1.756 | 1.7495 | 1.52 | 1.57 | 1.545
app_tarai | 3.194 | 3.164 | 3.179 | 1.364 | 1.392 | 1.378 | 1.23 | 1.281 | 1.2555
app_uri | 5.652 | 5.657 | 5.6545 | 2.372 | 2.381 | 2.3765 | 2.456 | 2.412 | 2.434
io_file_create | 1.345 | 1.351 | 1.348 | 0.579 | 0.569 | 0.574 | 0.872 | 0.891 | 0.8815
io_file_read | 1.379 | 1.372 | 1.3755 | 0.652 | 0.678 | 0.665 | 0.757 | 0.737 | 0.747
io_file_write | 1.086 | 1.124 | 1.105 | 0.503 | 0.514 | 0.5085 | 0.394 | 0.413 | 0.4035
loop_for | 8.137 | 8.393 | 8.265 | 3.485 | 3.685 | 3.585 | 2.924 | 2.924 | 2.924
loop_generator | 2.993 | 3.061 | 3.027 | 1.301 | 1.305 | 1.303 | 1.274 | 1.278 | 1.276
loop_times | 7.592 | 6.963 | 7.2775 | 3.423 | 2.857 | 3.14 | 2.614 | 2.657 | 2.6355
loop_whileloop | 3.335 | 3.449 | 3.392 | 1.463 | 1.476 | 1.4695 | 1.067 | 1.059 | 1.063
loop_whileloop2 | 0.7 | 0.725 | 0.7125 | 0.33 | 0.327 | 0.3285 | 0.23 | 0.227 | 0.2285
so_ackermann | 3.488 | 3.431 | 3.4595 | 1.459 | 1.479 | 1.469 | 1.321 | 1.336 | 1.3285
so_array | 8.03 | 8.002 | 8.016 | 3.445 | 3.477 | 3.461 | 3.015 | 3.077 | 3.046
so_binary_trees | 2.21 | 2.157 | 2.1835 | 0.892 | 0.905 | 0.8985 | 0.853 | 0.864 | 0.8585
so_concatenate | 2.426 | 2.334 | 2.38 | 1.004 | 1.026 | 1.015 | 0.862 | 0.86 | 0.861
so_count_words | 1.534 | 1.583 | 1.5585 | 0.682 | 0.698 | 0.69 | 0.631 | 0.628 | 0.6295
so_exception | 6.177 | 5.951 | 6.064 | 2.483 | 2.473 | 2.478 | 2.754 | 2.712 | 2.733
so_fannkuch | 122.375 | 122.345 | 122.36 | 51.535 | 51.816 | 51.6755 | 54.612 | 55.241 | 54.9265
so_fasta | 16.102 | 15.909 | 16.0055 | 6.739 | 6.653 | 6.696 | 6.712 | 6.567 | 6.6395
so_k_nucleotide | 9.231 | 9.204 | 9.2175 | 4.021 | 3.972 | 3.9965 | 3.861 | 3.895 | 3.878
so_lists | 1.851 | 1.996 | 1.9235 | 0.855 | 0.845 | 0.85 | 0.782 | 0.778 | 0.78
so_mandelbrot | 49.539 | 49.33 | 49.4345 | 20.966 | 21.093 | 21.0295 | 19.441 | 19.679 | 19.56
so_matrix | 2.18 | 2.234 | 2.207 | 0.945 | 0.978 | 0.9615 | 0.819 | 0.858 | 0.8385
so_meteor_contest | 31.53 | 31.425 | 31.4775 | 13.338 | 13.479 | 13.4085 | 12.317 | 12.254 | 12.2855
so_nbody | 43.097 | 42.141 | 42.619 | 18.029 | 18.12 | 18.0745 | 18.64 | 18.931 | 18.7855
so_nested_loop | 6.564 | 6.656 | 6.61 | 2.668 | 2.796 | 2.732 | 2.219 | 2.275 | 2.247
so_nsieve | 15.524 | 15.361 | 15.4425 | 6.54 | 6.718 | 6.629 | 6.526 | 6.544 | 6.535
so_nsieve_bits | 20.059 | 20.106 | 20.0825 | 8.496 | 8.534 | 8.515 | 6.785 | 7.031 | 6.908
so_object | 4.818 | 4.795 | 4.8065 | 2.066 | 2.068 | 2.067 | 1.844 | 1.825 | 1.8345
so_partial_sums | 56.984 | 57.449 | 57.2165 | 24.446 | 24.164 | 24.305 | 24.598 | 24.707 | 24.6525
so_pidigits | 9.494 | 9.554 | 9.524 | 3.944 | 3.925 | 3.9345 | 2.865 | 2.846 | 2.8555
so_random | 2.318 | 2.349 | 2.3335 | 0.992 | 1.016 | 1.004 | 0.915 | 0.941 | 0.928
so_reverse_complement | 26.634 | 31.414 | 29.024 | 11.043 | 11.656 | 11.3495 | 12.476 | 12.484 | 12.48
so_sieve | 0.428 | 0.452 | 0.44 | 0.21 | 0.227 | 0.2185 | 0.135 | 0.169 | 0.152
so_spectralnorm | 22.134 | 22.296 | 22.215 | 9.6 | 9.451 | 9.5255 | 8.536 | 8.462 | 8.499
vm1_block* | 7.212 | 7.208 | 7.21 | 3.409 | 3.134 | 3.2715 | 2.769 | 2.819 | 2.794
vm1_const* | 2.143 | 1.988 | 2.0655 | 1.296 | 0.872 | 1.084 | 0.661 | 0.672 | 0.6665
vm1_ensure* | 0.385 | 0.368 | 0.3765 | 0.15 | 0.168 | 0.159 | 0.143 | 0.153 | 0.148
vm1_ivar* | 7.196 | 7.12 | 7.158 | 2.989 | 3.004 | 2.9965 | 2.735 | 2.741 | 2.738
vm1_ivar_set* | 7.063 | 6.819 | 6.941 | 2.803 | 2.974 | 2.8885 | 2.492 | 2.502 | 2.497
vm1_length* | 3.93 | 3.694 | 3.812 | 1.638 | 1.678 | 1.658 | 1.124 | 1.159 | 1.1415
vm1_neq* | 2.814 | 2.818 | 2.816 | 1.221 | 1.236 | 1.2285 | 0.887 | 0.888 | 0.8875
vm1_not* | 1.531 | 1.526 | 1.5285 | 0.651 | 0.702 | 0.6765 | 0.519 | 0.547 | 0.533
vm1_rescue* | 0.324 | 0.208 | 0.266 | 0.133 | 0.153 | 0.143 | 0.112 | 0.144 | 0.128
vm1_simplereturn* | 4.372 | 5.532 | 4.952 | 1.929 | 2.352 | 2.1405 | 1.731 | 1.661 | 1.696
vm1_swap* | 1.714 | 1.631 | 1.6725 | 0.73 | 0.724 | 0.727 | 0.484 | 0.522 | 0.503
vm2_array* | 2.747 | 2.719 | 2.733 | 1.164 | 1.142 | 1.153 | 1.034 | 1.04 | 1.037
vm2_case* | 0.591 | 0.6 | 0.5955 | 0.254 | 0.259 | 0.2565 | 0.256 | 0.275 | 0.2655
vm2_eval* | 102.12 | 104.003 | 103.0615 | 42.756 | 42.905 | 42.8305 | 42.582 | 43.292 | 42.937
vm2_method* | 6.839 | 7.739 | 7.289 | 2.926 | 2.914 | 2.92 | 2.515 | 2.531 | 2.523
vm2_mutex* | 6.675 | 6.75 | 6.7125 | 2.827 | 2.857 | 2.842 | 2.435 | 2.381 | 2.408
vm2_poly_method* | 10.517 | 10.217 | 10.367 | 4.385 | 4.578 | 4.4815 | 3.747 | 3.577 | 3.662
vm2_poly_method_ov* | 0.905 | 0.821 | 0.863 | 0.357 | 0.408 | 0.3825 | 0.385 | 0.412 | 0.3985
vm2_proc* | 2.923 | 2.917 | 2.92 | 1.31 | 1.219 | 1.2645 | 1.118 | 1.115 | 1.1165
vm2_regexp* | 7.88 | 7.773 | 7.8265 | 3.438 | 3.377 | 3.4075 | 2.713 | 2.722 | 2.7175
vm2_send* | 1.066 | 1.085 | 1.0755 | 0.463 | 0.463 | 0.463 | 0.427 | 0.428 | 0.4275
vm2_super* | 1.866 | 1.87 | 1.868 | 0.788 | 0.869 | 0.8285 | 0.696 | 0.735 | 0.7155
vm2_unif1* | 0.845 | 0.884 | 0.8645 | 0.41 | 0.415 | 0.4125 | 0.343 | 0.368 | 0.3555
vm2_zsuper* | 1.937 | 2.042 | 1.9895 | 0.821 | 0.849 | 0.835 | 0.742 | 0.777 | 0.7595
vm3_gc | 5.348 | 5.301 | 5.3245 | 2.234 | 2.251 | 2.2425 | 2.357 | 2.366 | 2.3615
vm3_thread_create_join | 5.113 | 5.263 | 5.188 | 2.353 | 2.289 | 2.321 | 3.317 | 3.288 | 3.3025
vm3_thread_mutex | 1.818 | 1.862 | 1.84 | 32.556 | 35.779 | 34.1675 | 4.661 | 6.225 | 5.443
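Each benchmark row reports two timed runs followed by their arithmetic mean (e.g. app_fib: 2.935 and 3.071 average to 3.003), repeated for each of the three configurations. The benchmark names (app_fib, so_fannkuch, vm1_*, vm2_*, ...) come from the standard ruby benchmark suite; the exact driver used to produce these numbers is not shown, but a minimal sketch using Ruby's standard Benchmark module (the `time_twice` helper and `fib` workload below are illustrative, not from the original run) could look like:

```ruby
require 'benchmark'

# Time a block twice and print run 1, run 2, and their mean,
# mirroring the three columns per configuration in the table above.
def time_twice(label)
  runs = Array.new(2) { Benchmark.realtime { yield } }
  mean = runs.inject(:+) / runs.size
  puts format('%s | %.3f | %.3f | %.4f', label, runs[0], runs[1], mean)
  mean
end

# Example workload: a naive recursive Fibonacci, standing in
# for the app_fib benchmark script.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

time_twice('app_fib') { fib(25) }
```

Averaging only two runs keeps each benchmark cheap but leaves the means sensitive to noisy neighbors, which is a plausible explanation for outliers like vm3_thread_mutex above.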
Tags: amazon, ec2, performance, ruby 1.9