Channel: Source @ Coveo

Microservices and exception handling in Java with Feign and reflection

Exception handling across microservices can be tedious, let’s see how the Java reflection API can help us ease the pain!

Microservices architecture

When it comes to building a complex application in the cloud, microservices architecture is the newest and coolest kid in town. It has numerous advantages over the more traditional monolithic architecture, such as:

  • modularization and isolation, which make development easier in a big team;
  • more efficient scaling of the critical paths of the application;
  • the possibility to upgrade one microservice at a time, making deployments less risky and less prone to unexpected side effects;
  • technology independence: by exposing an API with a clearly defined contract and a set of common rules shared by all microservices, you don’t have to care which language or database a microservice uses.

I could go on for a while about this; microservices are a great way to build applications in the cloud. There are lots of awesome OSS projects from our friends at Netflix and Spring that will help you do this: from service discovery to mid-tier load balancing and dynamic configuration, there’s a library for most requirements you’ll have to meet. It’s also great to see Spring coming aboard with Spring Cloud, collaborating and integrating some of the Netflix libraries into a very useful and simple library to use with your new or existing Spring application!

Caveats

It wouldn’t be fair to avoid talking about the downsides of microservices, as they do present some challenges and are not suited to everyone and every application out there. Splitting an application into microservices brings some additional concerns, like:

  • complex configuration management: 10 microservices? 10 configuration profiles, 10 Logback configurations, etc. (using a centralized configuration server can help you with this, though);
  • performance hit: you need to validate this token? No problem, just make a POST to this endpoint with the token in the body and you’ll get the response in no time! While this is true for most cases, the network overhead and the serialization/deserialization process can become a bottleneck, and you always have to be resilient to network outages or congestion;
  • boilerplate code for interacting with other microservices: whereas a single additional method on a class was enough in a monolithic architecture, in a microservices architecture you need a resource implementing an API, a client, some authorization mechanism, exception handling, etc.

Dynamic exception handling using Feign and reflection

In a monolithic application, handling exceptions is a walk in the park. However, if something goes wrong during an inter-service call, most of the time you’ll want to propagate this exception or handle it gracefully. The problem is, you don’t get an exception from the client: you get an HTTP code and a body describing the error, or maybe a generic exception, depending on the client used.

For some of our applications at Coveo, we use Feign to build our clients across services. It allows us to easily build clients by just writing an interface with the parameters, the endpoint, and the thrown exceptions, like this:

interface GitHub {
    @RequestLine("GET /users/{user}/repos")
    List<Repo> getUserRepos(@Param("user") String user) throws UserDoesNotExistException;
}

When using the client, you are able to easily decode errors using the ErrorDecoder interface with the received Response object when the HTTP code is not in the 200 range. Now, we only need a way to map the errors to the proper exception.
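For reference, Feign’s ErrorDecoder contract boils down to a single method that receives the failed Response and returns the exception to throw (its bundled Default implementation is omitted here):

public interface ErrorDecoder {
    Exception decode(String methodKey, Response response);
}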

Required base exception

Most of our exceptions here at Coveo inherit from a base exception which defines a readable errorCode that is unique per exception:

public abstract class ServiceException extends Exception {
    private String errorCode;

    // Constructors omitted

    public String getErrorCode() {
        return errorCode;
    }
}

This allows us to translate exceptions on the API into a RestException object with a consistent error code and message, like this:

{"errorCode":"INVALID_TOKEN","message":"The provided token is invalid or expired."}

Using the errorCode as the key, we can use the Java reflection API to build a map of thrown exceptions at runtime and rethrow them as if there was no inter-service call!
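The RestException DTO itself isn’t shown in this post; a minimal sketch compatible with the JSON above and with the decoder below could look like this (the field and accessor names are assumed from the snippets that follow):

public class RestException {
    private String errorCode;
    private String message;

    public String getErrorCode() {
        return errorCode;
    }

    public void setErrorCode(String errorCode) {
        this.errorCode = errorCode;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}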

Using reflection to create a dynamic ErrorDecoder

Alright, let’s dive into the code. First, we need a little POJO to hold the information needed for instantiation:

public class ThrownServiceExceptionDetails {
    private Class<? extends ServiceException> clazz;
    private Constructor<? extends ServiceException> emptyConstructor;
    private Constructor<? extends ServiceException> messageConstructor;

    // Getters and setters omitted
}

Then, we use reflection in the constructor to get the thrown exceptions from the client, by passing the Feign interface as a parameter:

public class FeignServiceExceptionErrorDecoder implements ErrorDecoder {
    private static final Logger logger = LoggerFactory.getLogger(FeignServiceExceptionErrorDecoder.class);

    private Class<?> apiClass;
    private Map<String, ThrownServiceExceptionDetails> exceptionsThrown = new HashMap<>();

    public FeignServiceExceptionErrorDecoder(Class<?> apiClass) throws Exception {
        this.apiClass = apiClass;
        for (Method method : apiClass.getMethods()) {
            if (method.getAnnotation(RequestLine.class) != null) {
                for (Class<?> clazz : method.getExceptionTypes()) {
                    if (ServiceException.class.isAssignableFrom(clazz)) {
                        if (Modifier.isAbstract(clazz.getModifiers())) {
                            extractServiceExceptionInfoFromSubClasses(clazz);
                        } else {
                            extractServiceExceptionInfo(clazz);
                        }
                    } else {
                        logger.info("Exception '{}' declared thrown on interface '{}' doesn't inherit from ServiceException, it will be skipped.",
                                    clazz.getName(), apiClass.getName());
                    }
                }
            }
        }
    }

With the thrown exceptions in hand, and knowing that they inherit from ServiceException, we extract the errorCode and the relevant constructors. It supports empty constructors and single String-parameter constructors:

private void extractServiceExceptionInfo(Class<?> clazz) throws Exception {
    ServiceException thrownException = null;
    Constructor<?> emptyConstructor = null;
    Constructor<?> messageConstructor = null;

    for (Constructor<?> constructor : clazz.getConstructors()) {
        Class<?>[] parameters = constructor.getParameterTypes();
        if (parameters.length == 0) {
            emptyConstructor = constructor;
            thrownException = (ServiceException) constructor.newInstance();
        } else if (parameters.length == 1 && parameters[0].isAssignableFrom(String.class)) {
            messageConstructor = constructor;
            thrownException = (ServiceException) constructor.newInstance(new String());
        }
    }

    if (thrownException != null) {
        exceptionsThrown.put(thrownException.getErrorCode(),
                             new ThrownServiceExceptionDetails()
                                     .withClazz((Class<? extends ServiceException>) clazz)
                                     .withEmptyConstructor((Constructor<? extends ServiceException>) emptyConstructor)
                                     .withMessageConstructor((Constructor<? extends ServiceException>) messageConstructor));
    } else {
        logger.warn("Couldn't instantiate the exception '{}' for the interface '{}', it needs an empty or String only *public* constructor.",
                    clazz.getName(), apiClass.getName());
    }
}

Bonus feature: when the scanned exception is abstract, we use Spring’s ClassPathScanningCandidateComponentProvider to get all the subclasses and add them to the map:

private void extractServiceExceptionInfoFromSubClasses(Class<?> clazz) throws Exception {
    Set<Class<?>> subClasses = getAllSubClasses(clazz);
    for (Class<?> subClass : subClasses) {
        extractServiceExceptionInfo(subClass);
    }
}

private Set<Class<?>> getAllSubClasses(Class<?> clazz) throws ClassNotFoundException {
    ClassPathScanningCandidateComponentProvider provider = new ClassPathScanningCandidateComponentProvider(false);
    provider.addIncludeFilter(new AssignableTypeFilter(clazz));
    Set<BeanDefinition> components = provider.findCandidateComponents("your/base/package/here");

    Set<Class<?>> subClasses = new HashSet<>();
    for (BeanDefinition component : components) {
        subClasses.add(Class.forName(component.getBeanClassName()));
    }
    return subClasses;
}

Finally, we need to implement Feign’s ErrorDecoder. We deserialize the body into the RestException object, which holds the message and the errorCode used to map to the proper exception:

private JacksonDecoder jacksonDecoder = new JacksonDecoder();

@Override
public Exception decode(String methodKey, Response response) {
    RestException restException = null;
    try {
        restException = (RestException) jacksonDecoder.decode(response, RestException.class);
        if (restException != null && exceptionsThrown.containsKey(restException.getErrorCode())) {
            return getExceptionByReflection(restException);
        }
    } catch (IOException e) {
        // Fail silently here, irrelevant as a new exception will be thrown anyway
    } catch (Exception e) {
        logger.error("Error instantiating the exception to be thrown for the interface '{}'", apiClass.getName(), e);
    }
    return defaultDecode(methodKey, response, restException); // Fallback not presented here
}

private ServiceException getExceptionByReflection(RestException restException) throws Exception {
    ServiceException exceptionToBeThrown;
    ThrownServiceExceptionDetails exceptionDetails = exceptionsThrown.get(restException.getErrorCode());

    if (exceptionDetails.hasMessageConstructor()) {
        exceptionToBeThrown = exceptionDetails.getMessageConstructor().newInstance(restException.getMessage());
    } else {
        exceptionToBeThrown = exceptionDetails.getEmptyConstructor().newInstance();
    }
    return exceptionToBeThrown;
}

Success!

Now, that wasn’t so hard, was it? By using this ErrorDecoder, all the exceptions declared thrown in our APIs, even the subclasses of abstract base exceptions, get a chance to live and be thrown on both sides of an inter-service call, with no specific treatment, just some reflection magic!
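For completeness, here is a rough idea of how such a decoder might be wired in when building the client; the base URL and the Jackson decoder are placeholders for whatever your own setup uses:

// Sketch only: build the decoder first, since its constructor declares "throws Exception".
FeignServiceExceptionErrorDecoder errorDecoder = new FeignServiceExceptionErrorDecoder(GitHub.class);

GitHub gitHub = Feign.builder()
                     .decoder(new JacksonDecoder())
                     .errorDecoder(errorDecoder)
                     .target(GitHub.class, "https://api.github.com");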

Hopefully this will come in handy for you, thanks for reading!


Indexing Only Relevant Parts of Sitecore Rendered Content

For website search, relevancy of the search results should be a priority. When indexing a Sitecore item with Coveo for Sitecore, you want as much information as possible to be indexed. That’s why you probably use the HtmlContentInBodyWithRequestsProcessor to index the Sitecore rendered HTML of the item. However, you don’t want to index global sections of the HTML like the header, footer, navigation, ads and sidebars.

A few solutions were available to do so. This post details a simple solution that involves only a Sitecore processor and minor edits to layouts, sublayouts or views.

Official Solution

The official documentation details a complex solution that requires:

  • A new device,
  • The duplication of most of your layouts,
  • The configuration of the presentation of all your rendering templates, and
  • A lot of attention to avoid forgetting to update the new device presentation when a change is made to the default device presentation.

This solution has the advantage of being easy for content editors and marketers to maintain.

Other Solutions

Other solutions are possible with the help of a developer:

  1. Modify the code of unwanted UI components to avoid rendering their content when detecting the Coveo HTTP request user agent.
  2. Code a custom processor to remove unwanted sections after the rendered HTML is fetched.

Introducing the CleanHtmlContentInBodyProcessor

The idea is to use harmless HTML comment elements around the HTML markup you don’t want to be indexed. The processor removes the comments and all the markup in between. It should run after the processor that fetches the HTML, but before the item is sent to the index.

Code

The main code of the processor is fairly simple. It uses Regex to delete matched sections of the markup. The code currently assumes that the HTML is encoded in UTF-8 but can be easily adjusted for your integration. The complete code can be found in the Coveo Samples GitHub repository.

public class CleanHtmlContentInBodyProcessor : IProcessor<CoveoPostItemProcessingPipelineArgs>
{
    public string StartCommentText { get; set; }
    public string EndCommentText { get; set; }

    public void Process(CoveoPostItemProcessingPipelineArgs p_Args)
    {
        if (ShouldProcess(p_Args)) {
            string originalHtmlContent = Encoding.UTF8.GetString(p_Args.CoveoItem.BinaryData);
            string cleanedHtmlContent = CleanHtmlContent(originalHtmlContent);
            p_Args.CoveoItem.BinaryData = Encoding.UTF8.GetBytes(cleanedHtmlContent);
        }
    }

    private string CleanHtmlContent(string p_HtmlContent)
    {
        return Regex.Replace(p_HtmlContent,
                             @"<!--\s*" + StartCommentText + @"\s*-->.*?<!--\s*" + EndCommentText + @"\s*-->",
                             "",
                             RegexOptions.Singleline);
    }
}

Usage

In the Configuration File

Add the processor node after your existing HTML fetching processor in your Coveo for Sitecore configuration file (Coveo.SearchProvider.config) or even better, in a patch file.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <coveoPostItemProcessingPipeline>
        <!-- Your existing HTML fetching processor -->
        <processor type="Coveo.SearchProvider.Processors.HtmlContentInBodyWithRequestsProcessor, Coveo.SearchProviderBase" />
        <!-- The CleanHtmlContentInBodyProcessor processor -->
        <processor type="Coveo.For.Sitecore.Samples.Processors.CleanHtmlContentInBodyProcessor, Coveo.For.Sitecore.Samples">
          <StartCommentText>BEGIN NOINDEX</StartCommentText>
          <EndCommentText>END NOINDEX</EndCommentText>
        </processor>
      </coveoPostItemProcessingPipeline>
    </pipelines>
  </sitecore>
</configuration>

In the Layouts, Sublayouts and Views

Add comment elements in your layouts, sublayouts, and views around the HTML markup you want to exclude from the indexed documents. The comment text needs to match the processor configuration.

<body>
  <!-- BEGIN NOINDEX -->
  <header>...</header>
  <!-- END NOINDEX -->
  <div class="main-content">...</div>
  <!-- BEGIN NOINDEX -->
  <footer>...</footer>
  <!-- END NOINDEX -->
  <script src="IncludedScript.js"></script>
  <!-- BEGIN NOINDEX -->
  <script src="ExcludedScript.js"></script>
  <!-- END NOINDEX -->
</body>

Rebuild

After adding the processor and the comments, rebuild your Sitecore indexes managed by Coveo for Sitecore to index the cleaned HTML content.

Possible Issues

Be cautious with the comment locations. There are 2 possible problems:

  1. Removing a different number of start tags than end tags. This will make your HTML invalid and cause a lot of rendering problems.
  2. Nested comments. Avoid them, as the code doesn’t support them. The content between the first start comment and the first end comment will be removed, leaving everything between the two end comments in place.

Conclusion

Whichever solution is used, a Coveo for Sitecore integrator should always ensure the best search relevancy by indexing everything but the unwanted content. This ensures a great user experience and an increase in the customer’s key performance indicators.

Isomorphic TypeScript, fetch, promises, ava and coverage

Writing an API client in JavaScript is a lot of work: you have to write one for Node.js and one for the browser. I found a way to have both in the same codebase with the same API, all that with only changes to the build scripts. It’s called isomorphic code, and doing it with modern TypeScript isn’t easy, but it’s achievable.

TypeScript brings lots of advantages to the JavaScript world with almost mandatory typings. But TypeScript code is transpiled, and playing well with other libraries that aren’t originally written in TypeScript requires manually written type definitions and some hacks to work with external tools, like code coverage and test frameworks.

Isomorphic

Isomorphic is a trendy word with a nice idea behind it: sharing some code between the frontend and the backend with minor or no changes. Since TypeScript compiles to JavaScript, it can run on Node.js and in the browser. An API client could therefore be written once and run with the same code everywhere.

I want my API client to fetch resources using the same simple call everywhere.

const client = new coveoanalytics.analytics.Client({ token: 'YOUR-TOKEN' });

// Send your event
client.sendCustomEvent({
    eventType: "dog",
    eventValue: "Hello! Yes! This is Dog!"
});

All this without having 2 codebases.

Window, fetch and promises

Let’s fix the main difference between Node.js and the browser.

Getting data in the browser is done using an XMLHttpRequest or the new fetch API, which is defined on the global window object.

fetch('http://localhost:80/').then((res) => {
    // Do stuff with the response
});

In Node.js:

var http = require('http');

http.get({ hostname: 'localhost', port: 80, path: '/' }, (res) => {
    // Do stuff with response
});

First things first: the fetch API is nice and simple, and it returns promises. But fetch isn’t defined in all browsers and is not even part of the Node.js standard library. Promises aren’t defined in all browsers either.

Fortunately there are nice libraries for both of these cases. Let’s use them.

npm install --save es6-promise isomorphic-fetch

But wait, don’t go too fast! You are in TypeScript: you need the type definitions if you don’t want to put the any type everywhere. Again in the console:

npm install --save-dev typings
typings install --save --ambient isomorphic-fetch es6-promise

Typings is a nice tool to find type definitions, and it contains the type definitions of the most popular JavaScript libraries.

Now let’s handle the 2 cases, in the browser and in Node.js.

Node.js

Since fetch is defined on the global object and promises are natively implemented in Node.js, just tell the people using your library to inject isomorphic-fetch in their Node.js application.

Compile using tsc with a tsconfig.json

{"compilerOptions":{"module":"commonjs","target":"es5","outDir":"dist","declaration":true,"noImplicitAny":true,"removeComments":true,"sourceMap":true},"files":["... your files""typings/main.d.ts"]}

With a Node.js entrypoint like this index.ts script:

import * as analytics from './analytics';
import * as SimpleAnalytics from './simpleanalytics';
import * as history from './history';
import * as donottrack from './donottrack';

export { analytics, donottrack, history, SimpleAnalytics };

Then build it with tsc. If you don’t have it installed globally, you can use the npm bin executable: $(npm bin)/tsc.

Browser

The browser is a special case. Not everyone is using a web bundler, and I wanted to provide a library that could be bootstrapped like Google Analytics, so I needed my own bundle. When people don’t use a module bundler, you have to expose your library via a global object.

We’ll bundle our library with Webpack and inject the promises and fetch libraries into it. We’ll also provide an entrypoint that exports a variable on the global window object.

First the entrypoint:

import * as entrypoint from './index';

global.ourlibentrypoint = entrypoint;

Then the webpack configuration

npm install --save-dev webpack ts-loader exports-loader
var webpack = require("webpack");

module.exports = {
    entry: "./src/browser.ts",
    output: {
        path: "./dist/",
        filename: "bundle.js"
    },
    devtool: 'source-map',
    resolve: {
        extensions: ['', '.ts'],
        root: __dirname
    },
    module: {
        loaders: [
            { test: /\.ts$/, loader: 'ts-loader' }
        ]
    },
    plugins: [
        // The injection is done here
        new webpack.ProvidePlugin({
            'Promise': 'es6-promise',
            'fetch': 'exports?self.fetch!whatwg-fetch'
        }),
        new webpack.optimize.UglifyJsPlugin()
    ],
    ts: {
        compilerOptions: {
            // We already emit declarations in our normal compilation step, not needed here
            declaration: false
        }
    }
};

Cook your bundle with webpack! The dist/bundle.js file can now be included in your HTML.

Tests

For sanity, let’s add tests to our library. We’ll use AVA from the prolific sindresorhus, a modern testing library for JavaScript. Happily, it comes with its own d.ts bundled, so there’s no need for typings for that one.

The setup is simple.

npm install --save-dev ava

A different tsconfig.json is needed for tests. So here is tsconfig.test.json:

{"compilerOptions":{"module":"commonjs","target":"es5","outDir":"dist_test","declaration":false,"noImplicitAny":true,"removeComments":true,//Inlinesourcemaparerequiredbynycthecoveragetool//tocorrectlymaptogoodfiles."inlineSourceMap":true},"files":["... your test files","test/lib.d.ts","typings/main.d.ts"]}

Some libs forget type definitions. In my case, I had to add a special lib.d.ts for the tests.

test/lib.d.ts:

interface IteratorResult<T> {
    done: boolean;
    value?: T;
}

interface Iterator<T> {
    next(value?: any): IteratorResult<T>;
    return?(value?: any): IteratorResult<T>;
    throw?(e?: any): IteratorResult<T>;
}

To enable extended Babel support in AVA, you have to require babel-register. You can do this in the package.json file by adding an ava key:

"ava": {
  "require": [
    "babel-register" ] }

Tests can be run with tsc -p tsconfig.test.json && ava \"**/*test.js\"

Coverage

Adding coverage was simple. AVA runs tests in different processes, so you need a coverage runner that supports this; nyc does that task for you.

npm install --save-dev nyc

You’ll have to create a file which includes all your TypeScript files, so nyc and AVA are aware of all the TypeScript available. I created a fake test that loads the Node.js entrypoint. That test is always green.

import test from 'ava';
import * as coveoanalytics from '../src/index';

test('coverage', t => {
    const _ = coveoanalytics;
});

It is also nice to get code coverage in the original language, which is TypeScript. To do this, you need to place the source maps inline: in your tsconfig.test.json, add the key "inlineSourceMap": true under "compilerOptions".

You can then run your tests using tsc -p tsconfig.test.json && nyc ava \"**/*test.js\"

Plugging all this together

If you followed the article without skipping parts, you should be good to go. Here’s a recap of the most important pieces.

package.json:

{
    ...
    // Your 2 compiled entrypoints here
    "main": "dist/index.js",
    "browser": "dist/bundle.js",
    ...
    "scripts": {
        ...
        "build:webpack": "webpack",
        "build:tsc": "tsc",
        "build": "npm run-script lint && npm run-script build:webpack && npm run-script build:tsc",
        "test": "tsc -p tsconfig.test.json && nyc ava \"**/*test.js\"",
        ...
    },
    ...
    "dependencies": {
        ...
        "isomorphic-fetch": "2.2.1",
        ...
    },
    "devDependencies": {
        ...
        "es6-promise": "3.1.2",
        "ava": "0.14.0",
        "exports-loader": "0.6.3",
        "nyc": "6.4.4",
        "typescript": "1.8.10",
        "typings": "0.8.1",
        "webpack": "1.13.0",
        ...
    },
    ...
    "ava": {
        "require": ["babel-register"]
    }
}

You also need:

  • 1 tsconfig file for your normal builds (Webpack and Node.js)
  • 1 tsconfig file for your tests
  • 1 typings file to have the type definitions of isomorphic-fetch and es6-promise
  • A lot of tests
  • 1 Browser entrypoint (mine is named browser.ts)
  • 1 Node entrypoint (mine is named index.ts)
  • A webpack.config.js file similar to the one above

Gluing everything together was tedious work, but it was worth it. TypeScript is a nice transpiler that brings a lot to a large application’s codebase. It is up to date and even transpiles to ES2015, which you can then retranspile with Babel if you want more features included.

Opening a Sitecore Dialog from a Bookmarklet

When developing dialogs, wizards, and applications in Sitecore, a developer has to open them many times per day. When the action to open them requires more than one click, a lot of time is lost.

Wouldn’t it be nice if one could open a dialog from the browser console command line or a bookmark?

Blitz - The story behind this year’s challenge

This year was the sixth edition of Coveo Blitz, our classic programming contest for students. The original purpose of the event is to find great, passionate developers and show them how fun, passionate, and driven our team is. We had the idea to step out of our comfort zone this year and focus on what we’ve learned in the last editions.

Template-ish method pattern using java 8

In the Usage Analytics service, there is a layer that validates whether a user is allowed to perform the requested action. This should not be a surprise to anybody, as all applications have some kind of security or permission check somewhere. Since the UA service is built in a layered architecture, that’s the job of the permission layer. The code is pretty boilerplate and very similar for all the different calls. It follows this logic (a rough sketch is shown after the list):

  • Extract user identity and account from token
  • Check if the user has the required permissions
    • If he does, call the service layer
    • If he doesn’t, throw an exception
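
As a purely hypothetical sketch of the idea (all names are invented here, not taken from the actual service), that boilerplate can be captured once with a Java 8 lambda acting as the template hook:

import java.util.function.Supplier;

// Hypothetical sketch only; the real permission layer has its own types and exceptions.
interface TokenService { String extractUser(String token); }
interface PermissionChecker { boolean isAllowed(String user, String permission); }

class PermissionLayer {
    private final TokenService tokens;
    private final PermissionChecker permissions;

    PermissionLayer(TokenService tokens, PermissionChecker permissions) {
        this.tokens = tokens;
        this.permissions = permissions;
    }

    // The "template": extract the identity, check the permission, then delegate to the service layer.
    <T> T withPermission(String token, String requiredPermission, Supplier<T> serviceCall) {
        String user = tokens.extractUser(token);
        if (!permissions.isAllowed(user, requiredPermission)) {
            throw new SecurityException("Missing permission: " + requiredPermission);
        }
        return serviceCall.get();
    }
}

A caller then only passes its own logic as a lambda, for example withPermission(token, "VIEW_REPORTS", () -> reportService.getReports(account)), where the service and permission names are again only illustrative.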

Typescript Dependency Injection and Decorators

In July 2015, Microsoft announced the release of TypeScript 1.5, introducing decorators based on the ES7 decorator proposal. I had to test it!


Adding support for 'require' in Nashorn

Some parts of Coveo’s query pipeline are extensible using JavaScript. We initially used DynJS, but since it’s now unmaintained, we had to switch to a new JS engine, namely Nashorn, which comes out of the box starting with Java 8. Nashorn works pretty well, but it’s missing built-in support for the require function that is used with CommonJS modules.

How to prevent frequent JavaScript mistakes

When writing JavaScript, I spend a lot of time fixing simple mistakes. Unlike with compiled languages, you are more likely to make mistakes, and it is easy for syntax errors to sneak into your code without you realizing it until you actually try to run it.

How many times have I gotten an undefined variable because I refactored some code and forgot to rename that variable?

Even though it has been more than 5 years since I wrote my first Hello World, the feeling remains the same: why did I make this mistake again?

Of reading too many resumes

We have many interns at Coveo right now. For the summer, this process starts in February, when we get over one hundred applications through multiple universities. All those applications had one thing in common: The Resume.

Over the years, I’ve read hundreds of them and have therefore accumulated a good list of what you should and shouldn’t do.

Using request objects with Feign

We recently decided to move our functional test stack from Python to Java, mainly to make coding the tests easier (our project’s backend is coded in Java) and thus increase the number of tests getting written. We needed a few things to make this possible, and one of them was a complete and comprehensive Java client for the Usage Analytics API. Since a lot of the Java API clients we use internally are built with Netflix’s Feign, I decided to give it a go.

Software Quality

When I try to code, I always ask myself what’s right and what’s wrong about software quality. Sometimes those questions aren’t easy to answer, but as software developers, we must answer them. Over my short time (4 years) as a developer, I have developed certain universal and basic questions. I found some by reading online and others by questioning myself. When answered correctly, they can give you a hint about the quality of a piece of software.


Sitecore PowerShell Extension with Coveo

The excellent Sitecore PowerShell Extension allows you to return items from your index and display their properties in a friendly manner, all of this at a much faster speed than using the Content Search API. This is, of course, just one function of that rich extension.

Open-Sourcing the Coveo JavaScript Search Framework

In July 2016, the Coveo Search UI, also known as the Coveo JavaScript Search Framework, became open-source. This means that, from now on, anyone will be able to go on GitHub, take the Coveo Search UI, and modify the code itself to adapt it to their own needs.

Coveo's upcoming 'indexless' offering

The Coveo R&D delegation just came back from Dreamforce in San Francisco and we had a fantastic week. Coveo sends a pretty large contingent every year, which includes part of our teams working directly or indirectly on our Coveo for Salesforce product.

One thing of great importance to me is that we finally got to announce our upcoming freemium offering, which will allow our customers and partners to use Coveo’s advanced UIs, Usage Analytics, and machine learning based ranking at a very low price (even for free, in some cases!). I’ve been working on this project on and off for almost a year now (starting from a late night prototype), and it has since grown into a full product. I can’t wait to see people using this in the field.

Push API Basics with Java Examples

The Coveo Cloud Push API is a must-have feature that allows Coveo Cloud to index on-premise content management systems, including metadata and security permissions. While most content management systems include a built-in search engine, they are often underpowered, incapable of combining content from multiple repositories, and lack advanced features like Coveo's Usage Analytics and machine learning based Reveal.

Coveo for Sitecore V4 Cloud - The Road to Production

Coveo for Sitecore 4.0 was released this spring and allowed Coveo for Sitecore users to move their index to the cloud, reducing maintenance effort and opening the way for the advanced cloud features, such as Reveal machine learning and the query pipelines. Integrating Coveo for Sitecore to your Sitecore solution is now easier than ever. You can download the package directly on the Coveo website and follow the installation wizard in Sitecore. This is great to try the product with a trial organization, but how do you manage a paid license? I received several questions in the past few months about environment setup and license management. In this blog post, I will try to clarify a few things.
