Alli Wren needed a plan. She could not run the G.I.A., her company, and keep up most of her other obligations; something else was needed. It had to handle things far better than a droid. It needed to be an expert in search and optimization, able to pick the correct search algorithm to determine what to do and to find data effectively. It needed a giant brain and the ability to traverse networks, and it needed to be controlled. It would also have to appear human; passing a Turing test would be a must. So Alli Wren smirked and got to work.
First, she finally had a use for all the data she had been collecting from the Holonet ever since she had taken over its maintenance for the Republic, the OP, and even a tiny patch of the Sith. Especially once the Republic expanded as it did, she soon had a plethora of Holonet satellites and sites full of data on the galaxy. The task then was to store it all and prepare thousands, if not millions, of choices, and to let the intelligence add choices as it saw fit. Once streamlined, the A.I. wouldn't need such data banks, just a connection to the Holonet and processing power.
It would need to deal with incomplete or uncertain information as well; a simple Bayesian network would be too general, but a good start. The problem was that it had to be able to make a decision even when an 'answer' wasn't available; it would have to determine how to respond on its own. It would have to know how to alter itself as it grew from experience, yet never be able to edit the safe locks preventing it from going rogue.
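The kind of reasoning under uncertainty Alli has in mind can be sketched with a single Bayesian update: the machine holds beliefs over hypotheses, revises them on noisy evidence, and acts on the most probable one instead of waiting for a definitive answer. The hypotheses, probabilities, and sensor labels below are invented for illustration.

```python
# A minimal sketch of decision-making under uncertainty via Bayes' rule.
# All numbers and names here are illustrative, not from the story.

def posterior(prior, likelihoods, evidence):
    """Update beliefs over hypotheses given one piece of evidence.

    prior: dict hypothesis -> P(h)
    likelihoods: dict hypothesis -> dict evidence -> P(e | h)
    """
    unnorm = {h: prior[h] * likelihoods[h][evidence] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Two hypotheses about an incoming contact, plus noisy sensor evidence.
prior = {"hostile": 0.2, "friendly": 0.8}
likelihoods = {
    "hostile": {"jamming": 0.7, "clear": 0.3},
    "friendly": {"jamming": 0.1, "clear": 0.9},
}

belief = posterior(prior, likelihoods, "jamming")
# With no definitive 'answer' available, act on the most probable
# hypothesis rather than refusing to decide.
decision = max(belief, key=belief.get)
```

A full Bayesian network chains many such updates over a graph of dependent variables; this shows only the core step.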
The A.I. would have to be classified as strong superhuman or better. After all, she needed it to deal with all sorts of threats, and this was the security of the Republic they were talking about. If this worked, she could build a lesser version that only assaulted other ships' security. There would also have to be logic.
Yes, Alli thought as she debated which kinds of logic she wanted to use. She knew many had made headway with first-order logic, propositional logic, and even subjective logic, but she needed the A.I. to be able to reason. Perhaps she was barking up the wrong tree; perhaps they needed to go the way of statistical learning.
Yes, after all, they had done a lot with machine learning, using Gaussian mixture models, naive Bayes classifiers, and even decision trees. The math required for some of these techniques was substantial, though, and would demand some intense processing power. There were also neural networks, but Alli hadn't done much study in that area of artificial intelligence.
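Of the techniques named, the naive Bayes classifier is the simplest to sketch: count how often each feature appears with each label, then score a new observation by log prior plus smoothed log likelihoods. The features, labels, and training examples below are invented for illustration.

```python
# A toy naive Bayes classifier of the kind Alli considers above.
# Training data and feature names are invented for illustration.
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: iterable of (features, label) pairs."""
    label_counts = Counter(label for _, label in samples)
    feature_counts = defaultdict(Counter)  # label -> feature frequencies
    for feats, label in samples:
        feature_counts[label].update(feats)
    return label_counts, feature_counts

def predict(label_counts, feature_counts, feats):
    total = sum(label_counts.values())
    vocab = {f for counts in feature_counts.values() for f in counts}
    best, best_score = None, float("-inf")
    for label, n in label_counts.items():
        # Log prior plus log likelihoods with add-one smoothing,
        # under the 'naive' assumption that features are independent.
        score = math.log(n / total)
        denom = sum(feature_counts[label].values()) + len(vocab)
        for f in feats:
            score += math.log((feature_counts[label][f] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Classify ship transmissions as routine or suspicious.
data = [
    (("encrypted", "burst"), "suspicious"),
    (("encrypted", "burst"), "suspicious"),
    (("plain", "chatter"), "routine"),
    (("plain", "burst"), "routine"),
]
label_counts, feature_counts = train(data)
label = predict(label_counts, feature_counts, ("encrypted", "burst"))
```

The independence assumption is what keeps the math cheap, which matters given the processing-power concern raised above.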
That decided it. She began with the hardware, building a super brain in her secret lab to house the first incarnation of Serenity. She took care to ensure it was not connected to any other networks; that was key, as she did not want this to become a monster or a new problem like a certain attack on the world. Then began the complex part. She started work on a program, breaking it up into different layers with one ultimate controlling class. It was the final decision maker: when presented with anything, it would break the input into variables and then analyze the problem or choice for the best solution. On any normal machine this would take far too long to be anywhere near the level of a human, but with the algorithms and equipment Baktoid had for its droid brains, it could do so with much tweaking.
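The layered design with one controlling class at the top can be sketched as specialist layers that each score the candidate options, with the controller aggregating their assessments and making the final call. The layer names, options, and scoring below are invented for illustration.

```python
# A rough sketch of the layered architecture described above.
# Layer names, options, and scores are invented for illustration.

class Layer:
    """One specialist layer that scores candidate options."""
    def __init__(self, name, scorer):
        self.name = name
        self.scorer = scorer  # callable: option -> float

class Controller:
    """The 'ultimate controlling class': gathers every layer's
    assessment of each option and picks the best total."""
    def __init__(self, layers):
        self.layers = layers

    def decide(self, options):
        return max(
            options,
            key=lambda opt: sum(layer.scorer(opt) for layer in self.layers),
        )

controller = Controller([
    Layer("tactics", lambda o: o["advantage"]),
    Layer("logistics", lambda o: -o["cost"]),
])
choice = controller.decide([
    {"name": "flank", "advantage": 5.0, "cost": 2.0},
    {"name": "siege", "advantage": 6.0, "cost": 4.0},
])
```

Keeping the final choice in one place is what makes the later safe locks enforceable: there is a single point every decision must pass through.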
The program would have to be able to edit its own decision-making capability as it deemed fit, to become more efficient. She would not limit it to what she thought were good decision-making methods; after all, perhaps Serenity could revolutionize the world of computer science by coming up with its own methods for evaluating and deciding on things. Perhaps she could even apply what the A.I. changed to other applications, she thought, as she built a logging program to monitor the A.I.'s software changes.
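The logging program could work along these lines: whenever the system replaces one of its own routines, record a fingerprint of the old and new versions so humans can study what changed afterward. The routine names and sources below are invented for illustration.

```python
# A minimal sketch of a change log for self-modifying routines.
# Routine names and source strings are invented for illustration.
import hashlib
import time

class ChangeLog:
    def __init__(self):
        self.entries = []
        self._current = {}  # routine name -> hash of its current source

    def register(self, name, source):
        """Record a routine's source; append a log entry whenever the
        source differs from the last registered version."""
        digest = hashlib.sha256(source.encode()).hexdigest()
        previous = self._current.get(name)
        if previous != digest:
            self.entries.append({
                "routine": name,
                "old": previous,       # None on first registration
                "new": digest,
                "time": time.time(),
            })
            self._current[name] = digest

log = ChangeLog()
log.register("route_planner", "def plan(): return 'breadth-first'")
# Later, the A.I. rewrites the routine; the modification is captured.
log.register("route_planner", "def plan(): return 'heuristic search'")
```

Storing hashes rather than full sources keeps the log cheap; a real system would also archive the sources themselves for the review teams.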
Alli spent months with a team of experts writing the software that became Serenity: deciding how it would deal with situations, what it was permitted to do on its own, and what required human permission. She had to ensure it would not go rogue and begin killing off those it served. That actually became the bigger problem: how to prevent it from betraying organics or Baktoid.
Strangely, it had been relatively easy to make a program that could think and grow as a person does. The tough part was to ensure that while it grew, you didn't limit it, but at the same time controlled it. They ended up putting certain hard guidelines in: first, that Baktoid, and Alli Wren and her descendants in particular, could never be harmed and were to be obeyed above all else. Second, forces seen as friendly were to be preserved when possible, but there were acceptable losses, though what was deemed acceptable was left open to interpretation. Alli knew she was safe, but she worked with the Republic on their military doctrine, coming up with limits on what sort of losses were acceptable to achieve a goal. After all, a heartless sentient could make decisions others did not have the heart to make.
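Those hard guidelines amount to immutable constraints checked before any plan is approved, separate from whatever the growing decision machinery prefers. The rule contents, loss ratio, and plan fields below are invented for illustration.

```python
# Sketch of 'hard guidelines' as constraints applied before approval.
# The protected parties, loss ratio, and plans are illustrative.

PROTECTED = frozenset({"Baktoid", "Alli Wren"})
ACCEPTABLE_LOSS_RATIO = 0.1  # illustrative doctrine limit

def violates_hard_rules(plan):
    """Reject outright any plan that harms a protected party or
    exceeds the agreed loss ratio for friendly forces."""
    if PROTECTED & set(plan.get("harmed", ())):
        return True
    friendly = plan.get("friendly_committed", 0)
    losses = plan.get("expected_friendly_losses", 0)
    return friendly > 0 and losses / friendly > ACCEPTABLE_LOSS_RATIO

def approve(plans):
    return [p for p in plans if not violates_hard_rules(p)]

plans = [
    {"name": "raid", "harmed": ("pirates",),
     "friendly_committed": 100, "expected_friendly_losses": 5},
    {"name": "reckless assault", "harmed": ("pirates",),
     "friendly_committed": 100, "expected_friendly_losses": 40},
]
allowed = approve(plans)
```

Keeping these checks outside the self-modifiable decision layers is what stops the growing intelligence from editing its own safe locks.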
The amount of processing power the intelligence took at first was staggering, and it was genuinely hard to maintain. But then something even more amazing happened: Serenity began to optimize herself. She came up with better ways to write her own code, keeping the same ideas but improving on them. Alli logged the changes, and on review they seemed so obvious, yet assumed so much that Alli didn't even know, that some of them required whole teams devoted to understanding just how Serenity had done it. Alli had done it: she had created a suitable replacement for herself at the helm of the G.I.A.